| category | title | question_link | question_body | answer_html | __index_level_0__ |
|---|---|---|---|---|---|
Laplace transform
|
Do signals with a Fourier transform with discontinuities or zero amplitude (in some frequencies) have Laplace transform?
|
https://dsp.stackexchange.com/questions/20075/do-signals-with-a-fourier-transform-with-discontinuities-or-zero-amplitude-in-s
|
<p>I am reading a book on the Laplace transform, and in the section on the convergence of the Laplace transform for various signals the following theorem is stated, without any proof:</p>
<p><em>If a signal's Fourier transform is zero at some frequencies, or has discontinuities, the signal will not have a Laplace transform (like the sinc function, or periodic signals).</em></p>
<p>I cannot see the proof. For signals with a discontinuous Fourier transform I know that $\int_{-\infty}^{\infty} |x(t)|dt$ may not converge, which leads to the discontinuities in the Fourier spectrum (again, like sinc), but I don't think it implies that the Laplace transform does not exist at all.</p>
<p>What is the proof of the above statement (if it is correct)?</p>
|
<p>First of all, it is important to distinguish between the unilateral and the bilateral Laplace transform. For causal signals we can use the unilateral Laplace transform. If the Fourier transform of such a signal exists, then its Laplace transform exists as well. This is simply because here the Fourier transform is equivalent to the Laplace transform for $s=j\omega$, and if the Fourier integral converges, the Laplace transform must exist with a region of convergence $\Re\{s\}\ge a$ for some real-valued $a\le 0$.</p>
<p>If a two-sided signal is considered, we have to use the bilateral Laplace transform. Here the problem is that for $\Re\{s\}>0$ the signal is damped for $t>0$ but amplified for $t<0$ (and vice versa). Consequently, for two-sided signals with a constant envelope (e.g. sinusoids), the Laplace transform does not exist, even if the Fourier transform exists, because for any $\Re\{s\}\neq 0$ the integrand has an exponentially increasing envelope on one side, so the Laplace integral cannot converge. The same is true for all periodic signals (because they are sums of sinusoids). But even for two-sided signals which decay for increasing $|t|$, the bilateral Laplace transform might not exist if they decay too slowly, which is the case for the sinc function (decaying only as $1/t$).</p>
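The failure mode described above can be checked numerically. The Python sketch below (the damping value sigma = 0.1 and the window [-50, 50] are arbitrary illustrative choices, not from the post) shows that for a constant-envelope signal like cos(t), the bilateral Laplace integrand is damped on one half-line but amplified on the other, so its absolute integral blows up as the window grows:

```python
import numpy as np

# For a constant-envelope signal like cos(t), the bilateral Laplace
# integrand |cos(t) e^{-sigma t}| is damped on one half-line but
# amplified on the other, so the integral diverges for every sigma != 0.
def abs_mass(sigma, t_lo, t_hi, n=500_000):
    t, dt = np.linspace(t_lo, t_hi, n, retstep=True)
    return np.sum(np.abs(np.cos(t) * np.exp(-sigma * t))) * dt

sigma = 0.1
damped = abs_mass(sigma, 0.0, 50.0)      # t > 0: stays bounded
amplified = abs_mass(sigma, -50.0, 0.0)  # t < 0: grows like e^{0.1 * 50}
print(damped, amplified)
```

For sigma < 0 the roles of the two half-lines swap, and for sigma = 0 the integrand |cos(t)| is not absolutely integrable either, so no vertical line in the s-plane gives convergence.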
| 134
|
Laplace transform
|
How to compute the Laplace transform of a discrete signal?
|
https://dsp.stackexchange.com/questions/45030/how-to-compute-the-laplace-transform-of-a-discrete-signal
|
<p>Assume I have a discrete random signal, $f(t)$, for which I want to calculate the Laplace transform. </p>
<p>How can I do it in MATLAB without using <code>sym</code> variables? For example, consider this discrete signal <code>f(t)</code>:</p>
<pre><code>>> t=linspace(0,1000, 10000);
>> f=t.*cos(t);
</code></pre>
<p>Is there a way to calculate the Laplace transform numerically?</p>
<p>After thinking more about the problem I came up with this approach:</p>
<pre><code>t=linspace(0,1000, 10000);
f=t.*cos(t);
syms s;
F_s = symfun(sum(f.*exp(-s*t)), s);
ilaplace(F_s)
</code></pre>
<p>Though I am not sure it's plausible.</p>
|
<p>This requires the MATLAB Symbolic Math Toolbox:</p>
<pre><code>>> syms x s
>> f = x * cos(x);
>> F = laplace(f, x, s);        % Symbolic Laplace transform F(s)
>> t = linspace(0, 1000, 1000); % Or whatever values of s you want to evaluate the Laplace transform over
>> L = double(subs(F, s, t));   % Evaluate the transform at those points and convert it from 'syms' to double
</code></pre>
<p>I guess this isn't exactly what you're asking for, as it requires keeping your function $f$ as a "symbolic expression". I don't know of any function which performs a "discrete Laplace transform", though.</p>
<p>EDIT: Actually, I don't believe there is such a thing as a Laplace transform for discrete functions. However, the Laplace transform is just a specific case of the z-transform when $z = e^{sT}$ (with $T$ the sampling period), which is definitely defined for discrete signals. Unfortunately I can't say much more about this relation between the two transforms, but hopefully this gives you a little more information about how to proceed from here. </p>
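For the numerical route the asker was attempting, the missing ingredient is the dt factor that turns the sum into a Riemann approximation of the Laplace integral. A Python sketch (the test signal e^{-2t} and the evaluation points are illustrative choices, not from the post):

```python
import numpy as np

# Riemann-sum approximation of the unilateral Laplace transform of a
# sampled signal: F(s) ~= sum_k f(t_k) e^{-s t_k} * dt.  This is the
# asker's sum with the missing dt factor restored; it only approximates
# the true transform where the (truncated) integral converges.
def laplace_numeric(f, t, s):
    dt = t[1] - t[0]
    return np.sum(f * np.exp(-np.outer(s, t)), axis=1) * dt

t = np.linspace(0, 50, 200_001)
f = np.exp(-2 * t)                # known transform: 1/(s + 2)
s = np.array([1.0, 2.0, 5.0])
F_vals = laplace_numeric(f, t, s)
print(F_vals)                     # close to [1/3, 1/4, 1/7]
```

Note that for the asker's actual signal t*cos(t), which grows without bound, the sum only approximates the transform for Re{s} large enough that the truncated integrand has already decayed.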
| 135
|
Laplace transform
|
Can I use Fourier transforms instead of Laplace transforms (analyzing RC circuit)?
|
https://dsp.stackexchange.com/questions/26775/can-i-use-fourier-transforms-instead-of-laplace-transforms-analyzing-rc-circuit
|
<p>I don't study electrical engineering or something related but I was assigned a problem on transfer functions, impulse responses, and in general, everything related to <a href="https://dsp.stackexchange.com/questions/536/what-is-meant-by-a-systems-impulse-response-and-frequency-response">this post</a>. (Specifically, I'm analyzing an RC circuit)</p>
<p>I do know the details of the Laplace transform, but I prefer the Fourier transform. I wanted to know if there is any problem in substituting the Fourier transform for the Laplace transform in the analysis of LTI systems, impulse responses and transfer functions. </p>
<p>The important thing these transforms share is a Convolution theorem, which is highly relevant to the analysis of the things I've already mentioned above.</p>
<p>What do you think? What am I sacrificing by substituting transforms? Why is the Laplace transform favored over the Fourier transform?</p>
|
<p>Both transforms have a large overlap in their applications. So you can use both to analyze an RC circuit. However, with the unilateral Laplace transform it's much more straightforward to take initial conditions into account, such as an initially charged capacitor. This has to do with the unilateral Laplace transform of the derivative of a function:</p>
<p>$$\mathcal{L}\{x'(t)\}=s\mathcal{L}\{x(t)\}-x(0^-)$$</p>
<p>with the initial condition $x(0^-)$ occurring explicitly.</p>
<p>On the other hand, if you're interested in the behavior of the circuit when exposed to periodic input signals, then it's most natural to use the Fourier transform. Note that periodic signals (extending from $-\infty$ to $+\infty$) don't have a Laplace transform.</p>
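As a concrete sketch of the initial-condition point, the derivative rule solves an initially charged RC circuit in a few lines of SymPy. The component values (R = C = 1), the unit-step input, and the symbol names below are illustrative assumptions, not taken from the question:

```python
import sympy as sp

# Unilateral-Laplace solution of an RC low-pass, R*C*v'(t) + v(t) = u(t),
# with an initially charged capacitor v(0-) = v0 (R = C = 1 assumed).
t, s = sp.symbols('t s', positive=True)
v0 = sp.symbols('v0')
Vs = sp.Symbol('Vs')

Vin = 1 / s                              # L{u(t)} = 1/s
# Derivative rule: L{v'(t)} = s*V(s) - v(0-)
eq = sp.Eq((s * Vs - v0) + Vs, Vin)
V = sp.solve(eq, Vs)[0]                  # (1/s + v0) / (s + 1)
v = sp.inverse_laplace_transform(V, s, t)
print(sp.simplify(v))                    # simplifies to 1 + (v0 - 1)*exp(-t) for t > 0
```

The initial charge v0 enters the answer directly, which is exactly the bookkeeping that plain Fourier analysis does not provide.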
| 136
|
Laplace transform
|
How is the simplified version of the Bromwich inverse Laplace transform integral derived?
|
https://dsp.stackexchange.com/questions/41525/how-is-the-simplified-version-of-the-bromwich-inverse-laplace-transform-integral
|
<p>I do not understand how the last equality is derived from the previous.
Apparently the first term in the integral (involving $\mathrm{cos}$) is equivalent to the second (involving $\mathrm{sin}$)!! How so??</p>
<p>I DO understand how the integral range is halved (since $F(s)^*=F(s^*)$, where $F(s)$ is the Laplace transform of $f(t)$).
Any help would be appreciated since this form is used often in numerical inverse Laplace transform algorithms.</p>
<p>[Note: $\hat{f}(s)$ below represents the Laplace transform of $f(t)$]</p>
<p><a href="https://i.sstatic.net/RlLWY.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/RlLWY.png" alt="enter image description here"></a></p>
<p>This quote is from the web source <a href="http://www.columbia.edu/~ww2040/LaplaceInversionJoC95.pdf" rel="nofollow noreferrer">Abate and Whitt, 1995</a>.</p>
|
<p>I agree that the derivation is unclear, yet the final result is correct (for $t>0$, see below). There are two conditions that are necessary for the final result to be true:</p>
<ol>
<li>$f(t)$ is real-valued</li>
<li>$f(t)$ is causal</li>
</ol>
<p>The step from line 2 to line 3 in the derivation assumes that $f(t)$ is real-valued, i.e., only the real part of the integrand is considered. The last step leading to the final result assumes causality of $f(t)$, i.e., $f(t)=0$ for $t<0$.</p>
<p>From the third line in the derivation we have for real-valued $f(t)$</p>
<p>$$f(t)=\frac{e^{at}}{2\pi}\int_{-\infty}^{\infty}\left[\text{Re}\left(\hat{f}(a+iu)\right)\cos(ut)-\text{Im}\left(\hat{f}(a+iu)\right)\sin(ut)\right]du\tag{1}$$</p>
<p>and, consequently,</p>
<p>$$f(-t)e^{2at}=\frac{e^{at}}{2\pi}\int_{-\infty}^{\infty}\left[\text{Re}\left(\hat{f}(a+iu)\right)\cos(ut)+\text{Im}\left(\hat{f}(a+iu)\right)\sin(ut)\right]du\tag{2}$$</p>
<p>Since for causal $f(t)$ we have $f(-t)=0$ for $t>0$, we can write</p>
<p>$$f(t)=f(t)+f(-t)e^{2at},\qquad t>0\tag{3}$$</p>
<p>Consequently, adding $(1)$ and $(2)$ gives</p>
<p>$$f(t)=\frac{e^{at}}{\pi}\int_{-\infty}^{\infty}\text{Re}\left(\hat{f}(a+iu)\right)\cos(ut)du,\qquad t>0\tag{4}$$</p>
<p>And since for real-valued $f(t)$ the integrand in $(4)$ is even, we finally obtain</p>
<p>$$f(t)=\frac{2e^{at}}{\pi}\int_{0}^{\infty}\text{Re}\left(\hat{f}(a+iu)\right)\cos(ut)du,\qquad t>0\tag{5}$$</p>
<p>q.e.d.</p>
<p>Note that this result is only valid for $t>0$, which is not stated in the paper you quoted. Of course, for $t<0$ we have $f(t)=0$.</p>
<p>Also note that instead of $(3)$ we could have written</p>
<p>$$f(t)=f(t)-f(-t)e^{2at},\qquad t>0\tag{6}$$</p>
<p>from which we can conclude that</p>
<p>$$f(t)=-\frac{e^{at}}{\pi}\int_{-\infty}^{\infty}\text{Im}\left(\hat{f}(a+iu)\right)\sin(ut)du,\qquad t>0\tag{7}$$</p>
<p>and, taking into account that for real-valued $f(t)$ the integrand is even, we get</p>
<p>$$f(t)=-\frac{2e^{at}}{\pi}\int_{0}^{\infty}\text{Im}\left(\hat{f}(a+iu)\right)\sin(ut)du,\qquad t>0\tag{8}$$</p>
<p>Comparing $(5)$ with $(8)$ we see the equivalence</p>
<p>$$\int_{0}^{\infty}\text{Re}\left(\hat{f}(a+iu)\right)\cos(ut)du=-\int_{0}^{\infty}\text{Im}\left(\hat{f}(a+iu)\right)\sin(ut)du\qquad t>0\tag{9}$$</p>
| 137
|
Laplace transform
|
Why are we still using Continuous Time Fourier Transform when we have Laplace Transform?
|
https://dsp.stackexchange.com/questions/36709/why-are-we-still-using-continuous-time-fourier-transform-when-we-have-laplace-tr
|
<p>I've read that the <strong>Laplace Transform</strong> is more versatile and can cover a broader range of signals compared to the <strong>Continuous Time Fourier Transform</strong>. Then why are we still using the <strong>Continuous Time Fourier Transform</strong>?</p>
| 138
|
|
Laplace transform
|
Laplace Transform of Cosine, Poles and Mapping to Frequency Domain
|
https://dsp.stackexchange.com/questions/37265/laplace-transform-of-cosine-poles-and-mapping-to-frequency-domain
|
<p>I am trying to understand the connection between Laplace transform ($s$-plane), and frequency domain calculation.</p>
<p>Let's take the Fourier transform of $\cos(\omega_0t)$, which equals to $\pi[\delta(\omega - \omega_0) + \delta(\omega + \omega_0)]$. So clearly the frequency domain has only two non-zero values at two particular frequencies, and others are zero. Fine!</p>
<p>Now lets do the Laplace transform of the same function: $\cos(\omega_0t)$, which gives me $F(s) = \frac{s}{s^2 + \omega_0^2}=\frac{s}{(s+j\omega_0)(s-j\omega_0)}$.</p>
<p><a href="https://i.sstatic.net/9RPA2.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/9RPA2.png" alt="enter image description here"></a></p>
<p>So there are two poles on the $y$-axis ($j\omega$ axis) as in the above plot; $\beta$ is $\omega_0$ here. Now if I move upwards from the origin, I stay on the frequency axis, and for every point except the pole locations I get a non-zero value [from $F(s)$], whereas only the pole locations should give me non-zero values (as can be seen from the Fourier transform), and zero for all other points.</p>
<p>But this is not happening here. Where exactly am I going wrong?</p>
<p>Your guidance will be greatly appreciated.</p>
|
<p>You're comparing the transforms of two different functions. You consider the Fourier transform of the function $x_1(t)=\cos(\omega_0 t)$, but you took the Laplace transform of the function $x_2(t)=\cos(\omega_0t)u(t)$, where $u(t)$ is the unit step function:</p>
<p>$$X_1(j\omega)=\int_{-\infty}^{\infty}x_1(t)e^{-j\omega t}dt\\
X_2(s)=\int_{0}^{\infty}x_2(t)e^{-st}dt=\int_{0}^{\infty}x_1(t)e^{-st}dt$$</p>
<p>Note the difference in the lower integration limits.</p>
<p>The (bilateral) Laplace transform of the function $x_1(t)=\cos(\omega_0t)$, $-\infty<t<\infty$, does not exist, whereas the Fourier transform of the function $x_2(t)=\cos(\omega_0t)u(t)$ does exist:</p>
<p>$$\mathcal{F}\{\cos(\omega_0t)u(t)\}=\frac{\pi}{2}[\delta(\omega-\omega_0)+\delta(\omega+\omega_0)]+\frac{j\omega}{\omega_0^2-\omega^2}\tag{1}$$</p>
<p>However, it cannot be obtained by simply replacing $s$ by $j\omega$ in the expression for $X_2(s)$ because $X_2(s)$ has poles on the imaginary axis, and replacing $s$ by $j\omega$ only gives the correct expression for the Fourier transform if the imaginary axis is inside the region of convergence.</p>
<p>Note that in general for <em>causal</em> functions there are three cases concerning the relationship between the Laplace transform and the Fourier transform:</p>
<ol>
<li><p>If the region of convergence (ROC) contains the $j\omega$-axis (i.e., all poles are in the left half-plane), the Fourier transform is simply obtained by the substitution $s=j\omega$.</p></li>
<li><p>If there are poles on the $j\omega$-axis, but no poles in the right half-plane, then the Fourier transform contains Dirac delta impulses (as in the example above) plus a term that can be obtained from the Laplace transform by setting $s=j\omega$. Note that in $(1)$ the right-most term is simply $X_2(j\omega)$.</p></li>
<li><p>If there are poles in the right half-plane, the Fourier transform of the corresponding causal signal does not exist.</p></li>
</ol>
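Case 1 can be verified numerically: for a causal signal with all poles in the left half-plane, the Fourier integral agrees with the Laplace transform evaluated at $s=j\omega$. A Python sketch with the illustrative signal x(t) = e^{-t}cos(2t)u(t), whose transform is X(s) = (s+1)/((s+1)^2 + 4) (this example is mine, not from the post):

```python
import numpy as np

# Case 1: poles at s = -1 +/- 2j are in the left half-plane, so the ROC
# contains the j*omega axis and s = j*omega recovers the Fourier transform.
t, dt = np.linspace(0.0, 60.0, 600_001, retstep=True)
x = np.exp(-t) * np.cos(2 * t)

omega = 3.0
ft_numeric = np.sum(x * np.exp(-1j * omega * t)) * dt   # Fourier integral
s = 1j * omega
laplace_on_axis = (s + 1) / ((s + 1) ** 2 + 4)          # X(j*omega)
print(abs(ft_numeric - laplace_on_axis))                # tiny difference
```

Repeating the experiment for cos(omega_0 t)u(t) fails near omega = omega_0: the numeric Fourier integral does not settle as the upper limit grows, which is case 2 announcing its delta impulses.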
| 139
|
Laplace transform
|
Why Fourier transform is not sufficient and we have to use Laplace transform?
|
https://dsp.stackexchange.com/questions/32357/why-fourier-transform-is-not-sufficient-and-we-have-to-use-laplace-transform
|
<ul>
<li><p>Is there an easy way to explain the motivation behind the use of Laplace transform instead of Fourier transform?</p></li>
<li><p>Isn't it true that any periodic function can be represented by sines and cosines? Why introduce the exponential idea? </p></li>
<li>Why not use differential equations with the Fourier transform? An example would help.</li>
</ul>
<p>*Asked the same question a while ago at math.stackexchange but no answers given.</p>
|
<p>The Laplace Transform is more representative of real systems that have a starting point, which is why the integral starts at 0, and also why the unit step function is generally talked about alongside the Laplace Transform. With the Laplace Transform, we can examine the transient and steady-state behavior of a system.</p>
<p>Using $e^{st}$ instead of $e^{i\omega t}$ allows us to examine different aspects of a physical system. The variable $s$ is complex, and if the real part were set to 0, it would reduce to a truncated Fourier Transform. The real part of $s$ is related to the amount of damping in the system. Also, with the Laplace Transform, a system's stability can be considered. </p>
<p>In short, Laplace is used to consider damping, stability, transient and steady-state behavior of a physical system (represented by a differential equation). </p>
| 140
|
Laplace transform
|
Why do singularities on the imaginary axis affect the Fourier transform differently than the Laplace transform?
|
https://dsp.stackexchange.com/questions/91712/why-do-singularities-on-the-imaginary-axis-affect-the-fourier-transform-differen
|
<p>(Please note that I'm aware there are already several questions asking about the difference between the two transforms. However, none of them that I could find touch on this specific issue of the effect of singularities.)</p>
<p>I was reading <a href="https://dsp.stackexchange.com/a/15356/56502">this answer</a> which says that if,</p>
<blockquote>
<p>the region of convergence is <span class="math-container">$Re\{s\}>0$</span> but there are singularities on the <span class="math-container">$j\omega$</span> axis [, then] both transforms exist but they have different forms. <strong>The Fourier transform has additional delta impulses.</strong> Consider the function <span class="math-container">$f(t)=e^{j\omega_0 t}u(t)$</span>. From (1), its Laplace transform is given by</p>
<p><span class="math-container">$$F(s)=\frac{1}{s-j\omega_0}$$</span>
However, due to the singularity on the <span class="math-container">$j\omega$</span> axis, its Fourier transform is</p>
<p><span class="math-container">$$F(j\omega)=\pi\delta(\omega-\omega_0)+\frac{1}{j\omega-j\omega_0}$$</span></p>
</blockquote>
<p>The part I bolded is what has me confused. If I understand correctly, the Fourier transform integrates over the entire real number line and is the Laplace transform evaluated on the imaginary axis for functions which are zero for negative inputs (since Laplace integrates only on the nonnegative part of the real number line). So it seems like, for the example function in the above quote, Laplace and Fourier should have the same result when <span class="math-container">$s=j\omega$</span>. Why does the singularity on the imaginary axis mean this isn't true? I don't understand why it adds a delta pulse to the Fourier transform but not the Laplace transform.</p>
|
<p>The Laplace transform of a function <span class="math-container">$f(t)$</span> is defined as:</p>
<p><span class="math-container">$$F(s) = \int_{0^-}^{\infty} e^{-st} f(t) \, dt,$$</span></p>
<p>where <span class="math-container">$s$</span> is a complex variable <span class="math-container">$s = \sigma + j\omega$</span>, and the region of convergence (ROC) is the set of <span class="math-container">$s$</span> values for which this integral converges. Also the lower limit <span class="math-container">$0^-$</span> is shorthand notation for:</p>
<p><span class="math-container">$$\lim_{\epsilon \to 0^+} \int_{-\epsilon}^{\infty} \cdot$$</span></p>
<blockquote>
<p>This limit emphasizes that any point mass located at 0 is entirely captured by the <a href="https://en.m.wikipedia.org/wiki/Laplace_transform" rel="nofollow noreferrer">Laplace Transform</a>.</p>
</blockquote>
<p>The Fourier transform of a function <span class="math-container">$f(t)$</span> is defined as:</p>
<p><span class="math-container">$$F(j\omega) = \int_{-\infty}^{\infty} e^{-j\omega t} f(t) \, dt.$$</span></p>
<p>For functions that are zero for <span class="math-container">$t < 0$</span>, you can relate the Fourier transform to the Laplace transform by setting <span class="math-container">$s = j\omega$</span>. However, the existence of the Fourier transform requires that the integral converges <strong>absolutely</strong>, which is a <em>stricter</em> condition than for the Laplace transform.</p>
<p>In the case you mentioned, the function <span class="math-container">$f(t)=e^{j\omega_0 t}u(t)$</span>, where <span class="math-container">$u(t)$</span> is the unit step function, has a Laplace transform that is given by:</p>
<p><span class="math-container">$$F(s) = \int_{0^-}^{\infty} e^{-st} e^{j\omega_0 t} u(t) \, dt = \int_{0^-}^{\infty} e^{-(s-j\omega_0)t} \, dt = \frac{1}{s-j\omega_0},$$</span></p>
<p>provided that <span class="math-container">$Re\{s\} > 0$</span>, which ensures the convergence of the integral.</p>
<p>However, when you try to find the Fourier transform by setting <span class="math-container">$s = j\omega$</span>, you're evaluating the integral at the singularity, which is the point <span class="math-container">$s = j\omega_0$</span>. The Fourier transform integral does not converge in the conventional sense because of this singularity. The presence of the singularity means you can't directly apply the same formula you use for the Laplace transform to find the Fourier transform.</p>
<p>In such cases, <a href="https://en.m.wikipedia.org/wiki/Distribution_(mathematics)" rel="nofollow noreferrer">distribution</a> theory or generalized functions are used to interpret the Fourier transform. When you approach the singularity from the context of distributions, you end up with an additional term, which is the delta function, to account for the energy concentrated at the frequency <span class="math-container">$\omega_0$</span>. The delta function, <span class="math-container">$\delta(\omega-\omega_0)$</span>, represents an impulse at the frequency <span class="math-container">$\omega_0$</span>. Hence, the Fourier transform includes a term <span class="math-container">$\pi\delta(\omega-\omega_0)$</span>, which captures the singularity's effect on the transform, along with the principal value of the integral around the singularity, <span class="math-container">$\frac{1}{j\omega-j\omega_0}$</span>.</p>
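The nascent-delta behavior described above can also be seen numerically: with a small damping sigma, the transform is 1/(sigma + j(omega - omega_0)), and its real part sigma/(sigma^2 + (omega - omega_0)^2) keeps area pi as sigma shrinks; that fixed area is exactly the pi*delta(omega - omega_0) term. A Python sketch (the values of sigma, omega_0, and the grid are illustrative):

```python
import numpy as np

# Re{1/(sigma + j(w - w0))} = sigma / (sigma^2 + (w - w0)^2) is a
# Lorentzian that narrows as sigma -> 0 while its area stays pi:
# this is the nascent delta that becomes pi*delta(w - w0).
w0 = 2.0
w, dw = np.linspace(w0 - 500.0, w0 + 500.0, 2_000_001, retstep=True)
areas = []
for sigma in (1.0, 0.1, 0.01):
    areas.append(np.sum(sigma / (sigma**2 + (w - w0)**2)) * dw)
print(areas)   # each close to pi
```

The imaginary part, by contrast, converges pointwise to the principal-value term 1/(j(omega - omega_0)), which is the other half of the quoted Fourier transform.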
| 141
|
Laplace transform
|
Is the Laplace transform a special case of Fourier transform? (Not the other way around)
|
https://dsp.stackexchange.com/questions/64624/is-the-laplace-transform-a-special-case-of-fourier-transform-not-the-other-way
|
<p>I've always wondered why the Laplace transform reveals the transient properties of a system.
My doubt is based on the following fact:
the Fourier transform is given as </p>
<p><span class="math-container">\begin{equation}
\mathscr{F}\left\lbrace f(t)\right\rbrace = \int_{-\infty}^\infty f(t) e^{ -j \omega t} dt
\end{equation}</span></p>
<p>Where Mathematically and intuitively we believe that the angular frequency <span class="math-container">$\omega$</span>
takes only real value. </p>
<p>What if, instead of taking real angular frequencies, if the variable
<span class="math-container">$\omega$</span>
assumes a complex angular frequency in the form
<span class="math-container">$\beta - j \alpha$</span>
, then,</p>
<p><span class="math-container">$$
j \omega t = j (\beta - j \alpha) t = (\alpha + j \beta ) t = s t
$$</span></p>
<p>While taking Fourier transform w.r.t <span class="math-container">$\omega$</span>, the quantity <span class="math-container">$\beta$</span> will be real angular frequency in radians per second and <span class="math-container">$\alpha$</span> will be the <span class="math-container">$\textbf{imaginary angular }$</span> frequency in radians per second.</p>
<p><span class="math-container">\begin{equation}
\int_{-\infty}^\infty f(t) e^{ -j \omega t} dt = \int_{-\infty}^\infty f(t) e^{ - s t} dt = \mathscr{L}\left\lbrace f(t)\right\rbrace
\end{equation}</span></p>
<p>Hence is it mathematically correct to consider bilateral Laplace transform as a special case of Fourier transform (not the other way around) when <span class="math-container">$\omega$</span> takes a complex angular form <span class="math-container">$\beta - j \alpha$</span> ? I believe the fact that <span class="math-container">$\omega$</span> can take complex values is the reason why we get transient properties of the system when using Laplace transform. </p>
|
<p>The Fourier Transform is the Laplace Transform with the complex variable s restricted to the imaginary axis of the s-plane. For this reason the Fourier Transform only exists when the imaginary axis is within the region of convergence. The variable s is called a "complex frequency", as it is the frequency variable that can take on real (<span class="math-container">$\sigma$</span>) and imaginary (<span class="math-container">$\omega$</span>) components. That said, I would view the Fourier Transform as a subset of the Laplace Transform, or the Laplace Transform as an expansion on the Fourier Transform that provides a lot more functionality and can exist when the Fourier Transform can't. </p>
<p>This is also the reason that the frequency response for a system with a general transfer function <span class="math-container">$H(s)$</span> is given as <span class="math-container">$H(j\omega)$</span>. </p>
<p>When a system is restricted to <span class="math-container">$s= j\omega$</span> as the input, then the input is restricted to be only sinusoids or signals given by <span class="math-container">$e^{st}$</span> with <span class="math-container">$s = j\omega$</span> which maintain a constant magnitude with time. By allowing s to have real and imaginary components as in <span class="math-container">$s = \sigma + j\omega$</span> then we also allow the input to grow or shrink with time, depending on which point in the s plane is used as the input to the system. </p>
| 142
|
Laplace transform
|
Laplace Transform of $f(t+a), a>0$ where $f(t)$ is not periodic
|
https://dsp.stackexchange.com/questions/41211/laplace-transform-of-fta-a0-where-ft-is-not-periodic
|
<p>For $a > 0$, is there any known representation of the Laplace transform of $f(t+a)$ in terms of the Laplace transform of $f(t)$?</p>
<p>Note: In my application, $f(t)$ is not a periodic function and the functional form of $f(t)$ is not actually known a priori, because I have to couple it to another set of equations.</p>
|
<p>Let $s = \sigma + j\omega$, the inverse Laplace transform of $f(t+a)$ is given by
$$f(t+a) = \frac{1}{2\pi j} \int_{\sigma-j\infty}^{\sigma+j\infty} F(s)e^{s(t+a)} \mathrm{d}s = \frac{1}{2\pi j} \int_{\sigma-j\infty}^{\sigma+j\infty} F(s)e^{sa}e^{st} \mathrm{d}s.$$</p>
<p>Hence the <strong>bilateral</strong> Laplace transform of $f(t+a)$ is $F(s)e^{sa}$, where $F(s)$ is the Laplace transform of $f(t)$. For the unilateral case, see <em>Matt L.</em>'s answer.</p>
<hr>
<p>This is sometimes called the <strong>shifting theorem</strong>. <a href="http://mathfaculty.fullerton.edu/mathews/c2003/LaplaceShiftingMod.html" rel="nofollow noreferrer">See Theorem 12.16 here.</a></p>
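A quick numerical sanity check of the bilateral shifting theorem, using the illustrative two-sided function f(t) = e^{-|t|}, whose bilateral transform is F(s) = 2/(1 - s^2) on |Re{s}| < 1 (the values of s and a below are arbitrary points in the region of convergence):

```python
import numpy as np

# Bilateral shifting theorem: L{f(t+a)}(s) = e^{s a} F(s).
# Test function f(t) = e^{-|t|}, F(s) = 2/(1 - s^2), ROC |Re{s}| < 1.
t, dt = np.linspace(-60.0, 60.0, 1_200_001, retstep=True)
s, a = 0.3, 0.7

lhs = np.sum(np.exp(-np.abs(t + a)) * np.exp(-s * t)) * dt  # L{f(t+a)}
rhs = np.exp(s * a) * 2.0 / (1.0 - s**2)                    # e^{sa} F(s)
print(lhs, rhs)   # agree closely
```

The change of variables u = t + a in the integral makes the factor e^{sa} appear immediately, which is all the theorem says.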
| 143
|
Laplace transform
|
Linear linearly time varying systems Laplace transform
|
https://dsp.stackexchange.com/questions/87456/linear-linearly-time-varying-systems-laplace-transform
|
<p>Suppose that for a system <span class="math-container">$S$</span> we have <span class="math-container">$t_{2} = t_{1}+t_{0}\rightarrow h(t,t_{2}) = h(t,t_{1})+h(t,t_{0})$</span>. Then if we take the double Laplace transform with respect to <span class="math-container">$t,t_{2}$</span> we will get:</p>
<p><span class="math-container">$$L_{t_{2}}(L_{t}(h(t,t_{2}))) = L_{t_{2}}(L_{t}(h(t,t_{1})))+L_{t_{2}}(L_{t}(h(t,t_{0}))).$$</span>
But
<span class="math-container">\begin{align*}
t_{2} &= t_{1}+t_{0}\\
&\rightarrow \\
L_{t_{2}}(L_{t}(h(t,t_{2}))) &= L_{t_{1}+t_{0}}(L_{t}(h(t,t_{2})))\\
&\rightarrow\\
L_{t_{2}}(L_{t}(h(t,t_{2}))) &= L_{t_{0}+t_{1}}(L_{t}(h(t,t_{0})))+L_{t_{1}+t_{0}}(L_{t}(h(t,t_{1}))),
\end{align*}</span> but the Laplace operator is a linear operator, which means</p>
<p><span class="math-container">\begin{align*}
L_{t_{0}+t_{1}}(L_{t}(h(t,t_{0}))) &= L_{t_{0}}(L_{t}(h(t,t_{0})))+L_{t_{1}}(L_{t}(h(t,t_{0})))\\
\text{and}\\
L_{t_{0}+t_{1}}(L_{t}(h(t,t_{1}))) &= L_{t_{0}}(L_{t}(h(t,t_{1})))+L_{t_{1}}(L_{t}(h(t,t_{1}))),
\end{align*}</span>
but <span class="math-container">$t_{0}$</span> and <span class="math-container">$ t_{1}$</span> are independent variables, which means that <span class="math-container">$L_{t_{0}}(L_{t}(h(t,t_{1}))) = 0$</span> and <span class="math-container">$L_{t_{1}}(L_{t}(h(t,t_{0}))) = 0$</span>.</p>
<p>So we are left with:</p>
<p><span class="math-container">\begin{align*}
L_{t_{2}}(L_{t}(h(t,t_{2}))) &= L_{t_{1}}(L_{t}(h(t,t_{1})))+L_{t_{0}}(L_{t}(h(t,t_{0})))\\
\rightarrow \\
H(s,s_{2}) &= H(s,s_{1})+H(s,s_{0})
\end{align*}</span></p>
<p>Is that correct?</p>
| 144
|
|
Laplace transform
|
How can we prove the correctness of the integration property of the Laplace transform?
|
https://dsp.stackexchange.com/questions/72903/how-can-we-prove-the-correctness-of-the-integration-property-of-the-laplace-tran
|
<p>I was going through an Electrical Engineering textbook for understanding the Laplace transform and came across the following proof for one of the properties of the Unilateral Laplace transform.</p>
<p>Integration property of the unilateral Laplace transform:
<a href="https://i.sstatic.net/sjRS6.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/sjRS6.png" alt="enter image description here" /></a></p>
<p>In the proof, it is stated that:</p>
<p><span class="math-container">$$ e^{-st} \rightarrow 0 \mbox{ as } t \rightarrow \infty \ \ \ (i)$$</span></p>
<p>and therefore the term:</p>
<p><span class="math-container">$$ -\frac{e^{-st}}{s}\int_{0-}^{t}{f(\tau) \ d\tau} = 0 \mbox{, as } t \rightarrow \infty \ \ \ (ii)$$</span></p>
<p>But my doubt is: Isn't there a case where the real part of <span class="math-container">$s$</span> gets cancelled by the function obtained after the integration of <span class="math-container">$f(\tau)$</span>?</p>
<p>If that happens, then we can't guarantee that (ii) would hold, right? That is, the LHS in (ii) would not be zero.</p>
<p>So, isn't the proof that has been provided wrong?</p>
|
<p>You are right that the argument in the proof is not correct, or at least misleading. The fact that <span class="math-container">$e^{-st}$</span> becomes zero as <span class="math-container">$t\to\infty$</span> is true for any <span class="math-container">$s$</span> with <span class="math-container">$\textrm{Re}\{s\}>0$</span>. However, the integral</p>
<p><span class="math-container">$$\int_0^{\infty}f(\tau)d\tau$$</span></p>
<p>might not exist, so we can't just compute the limit by claiming that the first term becomes zero.</p>
<p>We have to compute</p>
<p><span class="math-container">$$\lim_{t\to\infty}-\frac{e^{-st}}{s}\int_0^{t}f(\tau)d\tau\tag{1}$$</span></p>
<p>The limit <span class="math-container">$(1)$</span> only equals zero for <span class="math-container">$s$</span> inside the region of convergence (ROC) of <span class="math-container">$f(t)$</span>, i.e., we don't just require that <span class="math-container">$e^{-st}\to 0$</span> for <span class="math-container">$t\to\infty$</span>, but we require it to decay sufficiently fast to compensate for the growth of the integral.</p>
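The gap in the textbook argument is easy to see numerically. For the illustrative choice f(t) = e^{t}, whose ROC is Re{s} > 1, the magnitude of the boundary term e^{-st}/s * Integral_0^t f(tau) d tau vanishes for s inside the ROC but blows up for 0 < s < 1, even though e^{-st} -> 0 there:

```python
import numpy as np

# Magnitude of the boundary term from the proof for f(t) = e^{t}
# (ROC: Re{s} > 1).  Here Integral_0^t e^{tau} d tau = e^{t} - 1.
def boundary_term(s, t):
    running_integral = np.exp(t) - 1.0
    return np.exp(-s * t) * running_integral / s

for s in (0.5, 2.0):
    # s = 0.5: e^{-st} -> 0, yet the term explodes (s outside the ROC)
    # s = 2.0: the term vanishes (s inside the ROC)
    print(s, boundary_term(s, 50.0))
```

So the correct statement is exactly the one in the answer: e^{-st} must decay fast enough to beat the growth of the running integral, which is what membership in the ROC guarantees.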
| 145
|
Laplace transform
|
Laplace transform of a finite duration signal
|
https://dsp.stackexchange.com/questions/59694/laplace-transform-of-a-finite-duration-signal
|
<p>Consider the following signal:
<span class="math-container">$$ x(t) = e^{-2t}[u(t) - u(t-5)] $$</span></p>
<p>This signal exists only from 0 to 5 time units. Elsewhere, it is zero.</p>
<p>Now, let's find the Laplace transform of this signal using the linearity and time-shift properties.</p>
<p><span class="math-container">$$ e^{-2t}u(t) \leftrightarrow \frac{1}{s+2} \ , \ \ Re \{s \} > -2 $$</span>
Also,
<span class="math-container">$$ e^{-2(t-5)}u(t-5) \leftrightarrow \frac{e^{-5s}}{s+2} \ , \ \ Re \{s \} > -2 $$</span>
<span class="math-container">$$ \Rightarrow e^{-2t}u(t-5) \leftrightarrow \frac{e^{-5s}e^{-10}}{s+2} \ , \ \ Re \{s \} > -2 $$</span></p>
<p>Thus, by the linearity property,
<span class="math-container">$$ e^{-2t}[u(t) - u(t-5)] \leftrightarrow \frac{1 - e^{-5(s+2)}}{s+2} \ , \ \ Re \{s \} > -2$$</span></p>
<p>Note: The time-shifting property doesn't alter the ROC.</p>
<p>However, the textbooks that I am referring to (Oppenheim and the Schaum's series) both state that the ROC of a finite-duration signal is the entire s-plane (possibly excluding zero or infinity in some cases).</p>
<p>But the above signal, despite being of finite duration, possesses an ROC that is not the entire s-plane. Please help me figure out this conceptual error. </p>
<p>Note: The above problem is from Schaum series. Here are the images of the textbook's section relevant to the above question.</p>
<p>Source of the Question and its solution:
<a href="https://i.sstatic.net/DdMcY.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/DdMcY.png" alt="enter image description here"></a></p>
<p>Property of finite duration signals: </p>
<p>In Schaum's outline series:
<a href="https://i.sstatic.net/TEuSS.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/TEuSS.png" alt="In Schaum's outline series"></a>
In oppenheim:
<a href="https://i.sstatic.net/lfsnF.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/lfsnF.png" alt="In oppenheim"></a></p>
|
<p>The property claimed by Schaum and Oppenheim is also true for the given example. Note that the Laplace transform</p>
<p><span class="math-container">$$X(s)=\frac{1-e^{-5(s+2)}}{s+2}\tag{1}$$</span></p>
<p>has <em>no</em> pole at <span class="math-container">$s=-2$</span>:</p>
<p><span class="math-container">$$\lim_{s\to -2}X(s)=\frac{1-(1-5(s+2))}{s+2}\Big{|}_{s=-2}=5$$</span></p>
<p>So the ROC is indeed the entire <span class="math-container">$s$</span>-plane. Even though the two individual signals in your solution share the same right half-plane as their ROC, their sum has the entire <span class="math-container">$s$</span>-plane as its ROC, because the two signals cancel everywhere except in a finite interval.</p>
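<p>A quick numerical check (a sketch in Python/NumPy, using the signal and interval from the question) confirms that the closed-form transform stays finite at <span class="math-container">$s=-2$</span> and that the defining integral converges even far into the left half-plane:</p>

```python
import numpy as np

def X(s):
    # Closed form from equation (1): X(s) = (1 - e^{-5(s+2)}) / (s + 2)
    return (1 - np.exp(-5 * (s + 2))) / (s + 2)

def laplace_numeric(s, T=5.0, n=200001):
    # Defining integral over the finite support: int_0^5 e^{-2t} e^{-st} dt,
    # evaluated with the trapezoidal rule.
    t = np.linspace(0.0, T, n)
    y = np.exp(-2 * t) * np.exp(-s * t)
    return np.sum((y[1:] + y[:-1]) * np.diff(t)) / 2

# Near s = -2 the apparent pole cancels and X(s) -> 5, as in the answer:
print(X(-2 + 1e-8))
# At s = -2 exactly, the integrand is 1 on [0, 5], so the integral is 5:
print(laplace_numeric(-2.0))
# Even far into the left half-plane the finite integral converges and
# matches the closed form (the ROC is the entire s-plane):
print(abs(X(-10.0) - laplace_numeric(-10.0)) / abs(X(-10.0)))
```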
| 146
|
Laplace transform
|
Laplace transform of a time domain sampled data MATLAB
|
https://dsp.stackexchange.com/questions/45287/laplace-transform-of-a-time-domain-sampled-data-matlab
|
<p>I have two sets of one-second voltage data sampled at 4000 Hz, and I can plot all the voltage points vs. time points in MATLAB. So I have a data matrix of length 4000: one column for the time in seconds, the other for the voltage.</p>
<p>In this way I have two simultaneously sampled time-domain data matrices. One is the input to a filter (Vin, t), the other is the output (Vout, t). I want to find the transfer function, both amplitude and phase shift.</p>
<p>How can I take the Laplace transform of this data? Can I do it without converting it to a expressible function like poly-fit? </p>
|
<p>Since you are sampling a real signal, you can just use the Fourier transform, i.e. the <code>fft</code> function in MATLAB. You can divide the FFT of the output by the FFT of the input, and then fit a curve to the result to approximate the transfer function.</p>
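<p>A minimal sketch of that procedure in Python/NumPy (the MATLAB version is analogous, using <code>fft</code> and element-wise division). The filter here is a hypothetical 3-tap moving average applied circularly, so the known answer is exact in this toy example; with real measured data you would divide the FFTs the same way, but beware bins where the input spectrum is near zero, and consider averaging over several records:</p>

```python
import numpy as np

fs = 4000                          # sampling rate from the question (Hz)
rng = np.random.default_rng(0)
vin = rng.standard_normal(fs)      # stand-in for the measured input column

# Hypothetical device under test: a 3-tap moving average. A circular
# shift is used so the FFT-ratio estimate is exact in this sketch.
vout = (vin + np.roll(vin, 1) + np.roll(vin, 2)) / 3

# Empirical transfer function: H(f) = FFT(vout) / FFT(vin)
H = np.fft.rfft(vout) / np.fft.rfft(vin)
f = np.fft.rfftfreq(fs, d=1 / fs)  # frequency axis in Hz

mag_db = 20 * np.log10(np.abs(H))  # amplitude response in dB
phase = np.angle(H)                # phase shift in radians
print(mag_db[0])                   # DC gain of a moving average is 1 (0 dB)
```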
| 147
|
Laplace transform
|
The Laplace transform - Steven W. Smith Book - impulse response cancellation method
|
https://dsp.stackexchange.com/questions/80628/the-laplace-transform-steven-w-smith-book-impulse-response-cancellation-met
|
<p>While studying the Laplace transform using <a href="http://www.dspguide.com/pdfbook.htm" rel="nofollow noreferrer">Steven W. Smith Book</a> I found something uncomprehending. In the 32th chapter - The Laplace Transform, page 590, last paragraph describes the cancelling phenomena when an impulse response is cancelled using an exponentially weighted sinusoid (see picture below). When cancelling occurs then we are dealing with zero or pole at the s-plane. What is not clear for are the products of the probing waveform and impulse response examples (3rd column in the figure below):</p>
<p>a) <strong>Decreasing with time</strong>: how it can be said that <span class="math-container">$p(t) \times h(t)$</span> is finite?<br />
b) <strong>Exact cancellation (zero)</strong>: how it can be said that <span class="math-container">$p(t) \times h(t)$</span> is zero?<br />
c) <strong>Too slow of increase</strong>: how it can be said that <span class="math-container">$p(t) \times h(t)$</span> is finite?<br />
d) <strong>Exact cancellation (pole)</strong>: how it can be said that <span class="math-container">$p(t) \times h(t)$</span> is infinite?<br />
e) <strong>Too fast of increase</strong>: how it can be said that <span class="math-container">$p(t) \times h(t)$</span> is undefinied?</p>
<p>I would be glad if someone could explain me what is the connection between <span class="math-container">$p(t) \times h(t)$</span> shape and if it is pole or zero.</p>
<h2><a href="https://i.sstatic.net/1veWW.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/1veWW.png" alt="enter image description here" /></a></h2>
|
<p>Peter's comment is correct, it's about the integral of the product <span class="math-container">$p(t)h(t)$</span>:</p>
<p><span class="math-container">$$I=\int_{-\infty}^{\infty}p(t)h(t)dt\tag{1}$$</span></p>
<p>The impulse response <span class="math-container">$h(t)$</span> has the following form:</p>
<p><span class="math-container">$$h(t)=\delta(t)+c_1\, e^{\sigma t}\cos(\omega_0t),\qquad t\ge 0\tag{2}$$</span></p>
<p>with some constant <span class="math-container">$c_1$</span> and some <span class="math-container">$\sigma<0$</span>.</p>
<p>From what I understand, the function <span class="math-container">$p(t)$</span> must look like</p>
<p><span class="math-container">$$p(t)=c_2\,e^{\sigma_pt}\cos(\omega_0t)\tag{3}$$</span></p>
<p>with some constant <span class="math-container">$c_2$</span>. I can't be sure but it seems likely that <span class="math-container">$c_1=c_2$</span>.</p>
<p>Now we consider <span class="math-container">$5$</span> cases:</p>
<ol>
<li><span class="math-container">$\sigma_p<0$</span></li>
<li><span class="math-container">$\sigma_p=0$</span></li>
<li><span class="math-container">$\sigma_p>0$</span> and <span class="math-container">$\sigma+\sigma_p<0$</span></li>
<li><span class="math-container">$\sigma_p=-\sigma$</span></li>
<li><span class="math-container">$\sigma_p>0$</span> and <span class="math-container">$\sigma+\sigma_p>0$</span></li>
</ol>
<p>In the first three cases we obtain for the product <span class="math-container">$p(t)h(t)$</span> a decaying exponential times <span class="math-container">$\cos^2(\omega_0t)$</span>, plus a Dirac impulse, the integral of which is finite in all cases. In the second case, the exponent equals <span class="math-container">$\sigma$</span>, and it appears that the constants can be chosen in such a way that the value of the integral can be made zero. I don't see any clear explanation of this in the book chapter, but I might be missing something.</p>
<p>In cases <span class="math-container">$4$</span> and <span class="math-container">$5$</span>, the exponential is constant (<span class="math-container">$4$</span>) or growing indefinitely (<span class="math-container">$5$</span>), hence the integral diverges in both cases.</p>
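<p>The five cases can be illustrated numerically. Below is a sketch (Python/NumPy) with assumed values <span class="math-container">$c_1=c_2=1$</span>, <span class="math-container">$\sigma=-1$</span> and <span class="math-container">$\omega_0=2\pi$</span>, and with the Dirac term of <span class="math-container">$h(t)$</span> set aside since it only adds a finite contribution. The integral of <span class="math-container">$p(t)h(t)$</span> over a long window stays small in cases 1 to 3, grows linearly with the window length in case 4, and explodes in case 5:</p>

```python
import numpy as np

sigma = -1.0           # assumed real part of the pole pair in h(t), eq. (2)
w0 = 2 * np.pi         # assumed oscillation frequency
t = np.linspace(0.0, 40.0, 400001)

def window_integral(sigma_p):
    # Integral of p(t) h(t) over [0, 40], Dirac term of h(t) set aside:
    # the integrand is exp((sigma + sigma_p) t) * cos^2(w0 t) for t >= 0.
    y = np.exp((sigma + sigma_p) * t) * np.cos(w0 * t) ** 2
    return np.sum((y[1:] + y[:-1]) * np.diff(t)) / 2

cases = {
    "1: decreasing (sigma_p < 0)":         window_integral(-0.5),
    "2: constant probe (sigma_p = 0)":     window_integral(0.0),
    "3: slow increase (sigma_p < -sigma)": window_integral(0.5),
    "4: exact cancellation (pole)":        window_integral(1.0),
    "5: too fast an increase":             window_integral(1.5),
}
for name, value in cases.items():
    print(f"{name}: {value:.4g}")
```

<p>In case 4 the integrand is <span class="math-container">$\cos^2(\omega_0 t)$</span>, so the window integral equals half the window length and keeps growing as the window extends, which is the divergence described above.</p>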
| 148
|
Laplace transform
|
Is it possible to take Fractional Fourier transform of Laplace transform?
|
https://dsp.stackexchange.com/questions/95963/is-it-possible-to-take-fractional-fourier-transform-of-laplace-transform
|
<p>Let <span class="math-container">$L_t\{f(x, t)\}$</span> denotes the Laplace transform (two-sided) of <span class="math-container">$f(x,t)$</span> with respect to <span class="math-container">$t$</span>. That is,</p>
<p><span class="math-container">$L_t\{f(x, t)\}(s)=\int_{-∞}^{+∞}f(x, t) e^{-st} dt$</span></p>
<p>and Fractional Fourier transform of <span class="math-container">$f(x,t)$</span> with respect to <span class="math-container">$x$</span> (for <span class="math-container">$\alpha≠0,π/2,π$</span>) is given by:</p>
<p><span class="math-container">$F_x\{f(x, t)\}(u)$</span>
<span class="math-container">$=\sqrt{\frac{1-i\cot\alpha}{2π}}\int_{-∞}^{+∞}f(x,t)e^{i\frac{\cot\alpha}{2}[x^2+u^2-2xu\sec\alpha]} dx$</span></p>
<p>Now, <strong>can we consider</strong> Fractional Fourier transform of <span class="math-container">$L_t\{f(x, t)\}(s)$</span> with respect to <span class="math-container">$x$</span> <strong>?</strong></p>
<p>I mean, can we consider</p>
<p><span class="math-container">$F_x\{L_t\{f(x, t)\}(s)\}(u)$</span>?</p>
<p>Which is,
<span class="math-container">$=\sqrt{\frac{1-i\cot\alpha}{2π}}\int_{-∞}^{+∞}L_t\{f(x, t)\}(s) e^{i\frac{\cot\alpha}{2}[x^2+u^2-2xu\sec\alpha]} dx$</span></p>
<p>(I am confused, because to consider it, we must have <span class="math-container">$L_t\{f(x, t)\}(s)$</span> as a function of <span class="math-container">$x$</span> and are there any more requirements?)</p>
<p>Update: Moreover, Usually in case of FRFT, we take functions from some space <span class="math-container">$W$</span> integrable functions such that, if <span class="math-container">$f$</span> is in <span class="math-container">$W$</span> then fractional Fourier transform of <span class="math-container">$f$</span> is also in <span class="math-container">$W$</span>. So in accordance with this, what conditions must be satisfied by <span class="math-container">$L_t\{f(x, t)\}(s)$</span> for the convergence of <span class="math-container">$F_x\{(L_t\{f(x, t)\}(s)\}(u)$</span>?</p>
<p>Please help.</p>
| 149
|
|
Laplace transform
|
Connection from Fourier to Laplace Transform
|
https://dsp.stackexchange.com/questions/78924/connection-from-fourier-to-laplace-transform
|
<p>I have a basic understanding of Laplace and Fourier but having trouble making a connection. Every time I attempt to look at reasons these are connected I'm told about the s-plane and regions of convergence which confuse me quite a lot trying to wrap my head around. What I would like to know is whether my understanding of Fourier is correct, with respect to phasors, and how I can connect this to Laplace.</p>
<p>So I understand that Fourier means taking your original signal and putting it into the frequency domain such as this figure here. <a href="https://i.sstatic.net/sTfBl.png" rel="noreferrer"><img src="https://i.sstatic.net/sTfBl.png" alt="enter image description here" /></a></p>
<p>Well I would like to then make this into a phasor which means I take a specific slice at a specific frequency such as this diagram<a href="https://i.sstatic.net/4zLNb.jpg" rel="noreferrer"><img src="https://i.sstatic.net/4zLNb.jpg" alt="enter image description here" /></a></p>
<p>And then, because this is a sinusoid, I would plot it as a component on the imaginary vs time plane of the real-complex plane, as shown. <a href="https://i.sstatic.net/GH9C0.jpg" rel="noreferrer"><img src="https://i.sstatic.net/GH9C0.jpg" alt="enter image description here" /></a>
Because I would like to make a phasor out of this, I therefore will stop at the same measurement, w, or frequency that I took my sinusoid from which will give me a phase angle to stop my rotation at. Stopping at this rotation will then give me a phasor such as this figure. <a href="https://i.sstatic.net/RzOWu.gif" rel="noreferrer"><img src="https://i.sstatic.net/RzOWu.gif" alt="enter image description here" /></a></p>
<p>What I need to understand is whether my intuition is correct in the connection from Fourier to phasors. Then how this connects Fourier to Laplace based on this image right here. I can see how you take multiple Fourier Transforms at different sigma's to create a Laplace, but I am having trouble discerning doing this from taking multiple sinusoids to make a Fourier. <a href="https://i.sstatic.net/vY1h7.jpg" rel="noreferrer"><img src="https://i.sstatic.net/vY1h7.jpg" alt="enter image description here" /></a></p>
|
<p>Yes, the OP’s intuition for Fourier seems correct. More specifically, I call attention to the basic "correlation" structure and add more meaning to what <span class="math-container">$e^{j\omega t}$</span> is as the true "single frequency component" in signal processing (as opposed to using a sine or cosine, each of which consists of two frequency "tones": a positive and a negative frequency). I detail all this in this recent post specific to the Fourier Transform and its intuition, so I won't repeat it here, but I recommend reading it before continuing with Laplace below:</p>
<p><a href="https://dsp.stackexchange.com/questions/78906/qualitative-explanation-of-fourier-transform/78911?noredirect=1#comment167118_78911">Qualitative Explanation of Fourier Transform</a></p>
<p>With that behind us, we can move onto Laplace. Let's first provide some basic motivations for Laplace, and then we can see further into its intuition.</p>
<p>In signal processing we often use the Laplace Transform for practical applications specific to the response of linear time-invariant systems (which includes two port frequency selective filters). We also refer to the impulse response of the system (in the time domain) which is related directly to the frequency response of the system (in the frequency domain) through use of the Fourier Transform. I demonstrate this in the graphic below:</p>
<p><a href="https://i.sstatic.net/Ln96g.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Ln96g.png" alt="Impulse Response" /></a></p>
<p>Thus we can determine any output by knowing the impulse response, but doing a convolution isn't as fun as doing a multiplication, so we can convert the time domain input waveform to the frequency domain (Fourier or Laplace) and multiply that by the frequency response (Fourier Transform of the impulse response) or transfer function (Laplace Transform of the impulse response) of the system. This is using the property that convolution in the time domain is a product in the frequency domain. Also consider that the Fourier Transform of an impulse is ALL frequencies uniformly--so what a great way to measure the frequency response: provide all frequencies at the input and see the result we get in frequency at the output (hence the FT of the impulse response IS the frequency response).</p>
<p>One problem with Fourier is that not all system impulse responses will have a Fourier Transform. The Fourier Transform is the result of the Laplace Transform when we restrict <span class="math-container">$s$</span> to be the <span class="math-container">$j\omega$</span> axis as the OP has graphically shown, and once we get our head around the "Region of Convergence" (ROC) we see that not all systems will have a ROC that includes the <span class="math-container">$j\omega$</span> axis-- for example systems with poles in the right half plane which are themselves unstable will not have a Fourier Transform. Yet these systems exist and we can stabilize them in control loops if we could only characterize them properly (Laplace can do this!).</p>
<p>To summarize some motivations for Laplace:</p>
<ul>
<li>Convolution in time domain is a product in Laplace</li>
<li>The Laplace Transform of the system's impulse response provides significant insight into its behavior</li>
<li>The Laplace Transform converts integro-differential equations to simple algebra (consider the impedance of a capacitor, which is 1/(sC), versus the integration we would need to solve when working with voltage and current relationships in a capacitor).</li>
</ul>
<p><strong>Intuitive Laplace</strong></p>
<p>That said, like Fourier, the Laplace Transform is a form of "correlation" in that it is an integration of complex conjugate products (see linked post above). The Fourier Transform results in a one dimensional plot of the magnitude (and phase) of this correlation versus the frequency variable for which we are correlating over. So from the magnitude we can see the "strength" of each frequency component within some arbitrary waveform <span class="math-container">$x(t)$</span> given by the Fourier Transform (correlation with <span class="math-container">$e^{j\omega t})$</span>:</p>
<p><span class="math-container">$$X(j\omega) = \int x(t)e^{-j\omega t}dt$$</span></p>
<p>Such as this example plot below where we have <span class="math-container">$\omega$</span> on one axis and show the magnitude on the other. Each sample actually has a magnitude and phase which we could show as separate plots, but important to note the results are complex quantities.</p>
<p><a href="https://i.sstatic.net/HRzGC.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/HRzGC.png" alt="example Fourier Transform" /></a></p>
<p>Laplace is of similar form, except we are "correlating" to <span class="math-container">$e^{st}$</span>:</p>
<p><span class="math-container">$$X(s) = \int x(t)e^{-s t}dt$$</span></p>
<p>With Fourier we decompose an arbitrary function into its individual base components as spinning phasors, each rotating in time at a constant rate (thus a constant frequency) with a constant magnitude.</p>
<p>With Laplace we add the ability for these phasors to grow or decay with time, and thus we determine all the base components in an arbitrary waveform (often the "waveform" is the impulse response of a system as introduced above) as spinning phasors that are allowed to grow or decay in time. The result of the Laplace transform when plotted is a surface plot, since we vary both parameters (rate of spin, i.e. frequency, and rate of decay). We typically plot only the singularities where this surface goes to infinity (as poles) and where it goes to zero (as zeros), since every other point on that surface is uniquely determined from those poles and zeros (thus they are all we need to show to completely represent it).</p>
<p>For example, as shown in the plot below, if our waveform (here an impulse response) was a decaying exponential given as <span class="math-container">$x(t)=e^{-2t}$</span>, then the Laplace Transform would go to infinity at <span class="math-container">$s=-2$</span>, which is the one point where we are correlating to <span class="math-container">$e^{st}= e^{-2t}$</span>. Makes sense!</p>
<p>This is the one and only location on the entire s-plane where, in the product <span class="math-container">$x(t)y(t)$</span> with <span class="math-container">$y(t)=e^{-st}$</span> (the product we form in the Laplace Transform before integrating), <span class="math-container">$y(t)$</span> grows with time (as <span class="math-container">$e^{2t}$</span>) in such a way as to perfectly counteract the decaying <span class="math-container">$x(t)$</span>, resulting in a constant <span class="math-container">$1$</span> for all time, and hence the integral of this product from <span class="math-container">$t=0$</span> to <span class="math-container">$t=\infty$</span> in the unilateral Laplace Transform (applicable to causal waveforms) grows to infinity. Similarly, if <span class="math-container">$x(t)$</span> has any frequency components (spinning phasors), the Laplace Transform will go to a singularity when <span class="math-container">$y(t)$</span> rotates exactly opposite (the complex conjugate) and grows to exactly counter any decay. Very cool!</p>
<p><a href="https://i.sstatic.net/zp9Ke.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/zp9Ke.png" alt="Laplace" /></a></p>
<p>So we also can now see the whole "Region of convergence" thing and what is going on there: We saw above how we found the pole at the location where <span class="math-container">$y(t)$</span> grew just enough to perfectly counteract <span class="math-container">$x(t)$</span>'s decay. The further we move <span class="math-container">$y(t)$</span>'s real component to the left on the real axis, the more it will grow, and thus the Laplace Transform won't converge to a solution for any points beyond that boundary. That boundary is specifically given by the rightmost pole for causal systems.</p>
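<p>The pole-hunting described above is easy to reproduce numerically. A sketch (Python/NumPy) for the example <span class="math-container">$x(t)=e^{-2t}u(t)$</span>: inside the ROC the truncated Laplace integral settles to the closed form <span class="math-container">$1/(s+2)$</span>, while at the pole <span class="math-container">$s=-2$</span> it just keeps growing with the integration limit:</p>

```python
import numpy as np

t = np.linspace(0.0, 60.0, 600001)
x = np.exp(-2 * t)                     # the impulse response from the example

def laplace(s, T):
    # Truncated unilateral Laplace integral: int_0^T x(t) e^{-st} dt
    m = t <= T
    tt = t[m]
    y = x[m] * np.exp(-s * tt)
    return np.sum((y[1:] + y[:-1]) * np.diff(tt)) / 2

# Inside the ROC (Re{s} > -2) the integral settles to 1/(s + 2):
s = 1 + 3j
print(laplace(s, 60.0), 1 / (s + 2))

# At the pole, x(t) e^{-st} = 1 for all t, so the truncated integral
# equals T and grows without bound as T -> infinity:
print(laplace(-2 + 0j, 30.0), laplace(-2 + 0j, 60.0))   # 30 and 60 (to rounding)
```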
| 150
|
Laplace transform
|
Relation between Laplace and Fourier transforms
|
https://dsp.stackexchange.com/questions/28100/relation-between-laplace-and-fourier-transforms
|
<p>I know that <span class="math-container">$$X_L(s) \Big|_{s=j\omega}=X_F(\omega)$$</span> if <span class="math-container">$x(t)$</span> is one sided and absolutely integrable and hence the imaginary axis of the Laplace transform is the Fourier transform.</p>
<p>But Fourier transform also has imaginary and real parts. So how could this be right?</p>
|
<p>The Laplace transform evaluated at $s=j\omega$ is equal to the Fourier transform if its region of convergence (ROC) contains the imaginary axis. This is also true for the bilateral (two-sided) Laplace transform, so the function need not be one-sided.</p>
<p>As for real and imaginary parts, since $s$ is a <em>complex</em> variable, both the Laplace and the Fourier transform generally have real and imaginary parts. Take as a simple example the function $x(t)=e^{-at}u(t)$, with $a>0$, where $u(t)$ is the unit step function. The Laplace transform is</p>
<p>$$X_L(s)=\frac{1}{s+a}\tag{1}$$</p>
<p>Since $a>0$, the ROC of $X_L(s)$ contains the imaginary axis, and the Fourier transform of $x(t)$ is simply obtained by evaluating $X_L(s)$ on the imaginary axis $s=j\omega$:</p>
<p>$$X_F(\omega)=X_L(j\omega)=\frac{1}{j\omega+a}\tag{2}$$</p>
<p>Since $s=\sigma+j\omega$ is generally complex, not only the Fourier transform but also the Laplace transform $(1)$ has a real and an imaginary part:</p>
<p>$$X_L(\sigma+j\omega)=\frac{1}{\sigma+j\omega+a}=\frac{\sigma+a}{(\sigma+a)^2+\omega^2}-j\frac{\omega}{(\sigma+a)^2+\omega^2}\tag{3}$$</p>
<p>Only when evaluated on the real axis $s=\sigma$ ($\omega=0$) does the imaginary part vanish.</p>
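<p>Both statements are easy to verify numerically. A sketch (Python/NumPy) for <span class="math-container">$x(t)=e^{-at}u(t)$</span> with <span class="math-container">$a=1.5$</span>: evaluating the transform integral on the imaginary axis reproduces <span class="math-container">$(2)$</span>, and off the axis it reproduces the real and imaginary parts in <span class="math-container">$(3)$</span>:</p>

```python
import numpy as np

a = 1.5                                # a > 0, so the ROC contains the j-omega axis
t = np.linspace(0.0, 50.0, 500001)
x = np.exp(-a * t)                     # x(t) = e^{-at} u(t), t >= 0

def transform(sigma, w):
    # Truncated integral int_0^inf x(t) e^{-(sigma + jw) t} dt
    y = x * np.exp(-(sigma + 1j * w) * t)
    return np.sum((y[1:] + y[:-1]) * np.diff(t)) / 2

w = 2.0
# On the imaginary axis (sigma = 0) we get the Fourier transform, eq. (2):
print(transform(0.0, w), 1 / (1j * w + a))

# Off the axis, the Laplace transform has the real/imaginary parts of eq. (3):
sigma = 0.7
d = (sigma + a) ** 2 + w ** 2
print(transform(sigma, w), (sigma + a) / d - 1j * w / d)
```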
| 151
|
Laplace transform
|
Laplace transform of product of signal and impulse train
|
https://dsp.stackexchange.com/questions/40433/laplace-transform-of-product-of-signal-and-impulse-train
|
<p>I'm reading 'Discrete Time Control Systems' book by Ogata and came across a few statements (specifically, (3-1) and (3-2)) which I have not been able to understand.</p>
<p>It is said that any continuous signal can be sampled and the output represented as
$$y(t) = \sum_{n=- \infty}^{+\infty}x(nT)\delta(t-nT) $$ </p>
<p>Now taking laplace transform
$$\begin{align}
Y(s) &= \sum_{n=- \infty}^{+\infty}x(nT)\mathscr{L}\{\delta(t-nT)\} \\
&= \sum_{n=- \infty}^{+\infty}x(nT)e^{-nTs} \\
\end{align}$$</p>
<p>Now I have a confusion:</p>
<p>Is the $\delta(t)$ function </p>
<ol>
<li>the Dirac delta function, so that $\mathscr{L}\{\delta(t-nT)\} = e^{-nTs} $, but then the signal representation makes no sense, as there is infinite amplitude in the output signal at multiples of $nT$</li>
<li>or is it the unit impulse function (value $1$ at $t=0$ and value $0$ everywhere else) in which case how exactly has $Y(s)$ been evaluated?</li>
</ol>
|
<p>Since no one else seems to have said it: if the ideally-sampled $x(t)$ is defined as</p>
<p>$$x_\text{s}(t) \triangleq \sum_{n=-\infty}^{+\infty}x(nT)\delta(t-nT) $$</p>
<p>and we define discrete-time samples as $x[n] \triangleq x(nT)$, the Laplace transform of</p>
<p>$$\begin{align}
X_\text{s}(s) &= \sum_{n=- \infty}^{+\infty}\mathscr{L}\{x(nT) \delta(t-nT)\} \\
&= \sum_{n=- \infty}^{+\infty}x[n] \mathscr{L}\{\delta(t-nT)\} \\
&= \sum_{n=- \infty}^{+\infty}x[n] e^{-nTs} \\
&= \sum_{n=- \infty}^{+\infty}x[n]z^{-n} \\
&= \mathcal{Z}\{x[n]\} \Bigg|_{z=e^{sT}} \\
\end{align}$$</p>
<p>or, if I abuse the notation a little and change the meaning of $X(\cdot)$, the Z-Transform of $x[n]$ is related to the Laplace Transform of $x_\text{s}(t)$ by</p>
<p>$$ \mathcal{Z}\{x[n]\}\Bigg|_{z=e^{sT}} = X(z)\Bigg|_{z=e^{sT}} = \mathscr{L}\{x_\text{s}(t)\} $$</p>
<p>So the Z-Transform of a discrete-time signal is nothing other than the Laplace Transform of the corresponding ideally-sampled continuous-time signal.</p>
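<p>A quick numerical check of this identity (a Python/NumPy sketch with an assumed example, <span class="math-container">$x(t)=e^{-at}u(t)$</span> sampled with period <span class="math-container">$T$</span>): the Laplace transform of the ideally-sampled signal, summed term by term, matches the closed-form Z-transform of <span class="math-container">$x[n]$</span> evaluated at <span class="math-container">$z=e^{sT}$</span>:</p>

```python
import numpy as np

a, T = 0.5, 0.1                  # assumed decay rate and sampling period
n = np.arange(0, 2000)           # enough terms for both sums to converge
x_n = np.exp(-a * n * T)         # x[n] = x(nT) for x(t) = e^{-at} u(t)

s = 1.0 + 3.0j                   # any s with Re{s} > -a
z = np.exp(s * T)                # the substitution z = e^{sT}

# Laplace transform of the impulse-train model, summed directly:
X_s = np.sum(x_n * np.exp(-n * T * s))

# Z-transform of x[n] = r^n with r = e^{-aT}: closed form 1 / (1 - r z^{-1})
X_z = 1 / (1 - np.exp(-a * T) / z)

print(X_s, X_z)                  # equal, by the substitution z = e^{sT}
```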
| 152
|
Laplace transform
|
Why is the ROC of Laplace transform independent of imaginary part of s?
|
https://dsp.stackexchange.com/questions/56793/why-is-the-roc-of-laplace-transform-independent-of-imaginary-part-of-s
|
<p>An integral is defined as converging if it yields a finite value upon application of limits of integration. It is divergent otherwise.</p>
<p>Now sticking to the mathematical notation of Laplace transform, we have for a causal function <span class="math-container">$x(t) = u(t)$</span>:
<span class="math-container">$$
X(s) = \int_0^\infty x(t)e^{-st}dt = \int_0^\infty e^{-st}dt
$$</span></p>
<p><span class="math-container">$$
X(s) = -\frac{1}{s}e^{-st}\Biggr|_{0}^{\infty} = \frac{1}{s} = \frac{1}{(\sigma + jw)}
$$</span> </p>
<p>By multiplying numerator and denominator with <span class="math-container">$(\sigma - jw)$</span>, we get:
<span class="math-container">$$
X(s) = \frac{(\sigma - jw)}{(\sigma^2 +\omega^2)}
$$</span></p>
<p>For the Laplace transform not to exist, the denominator must become 0. Hence in this contrived example, both <span class="math-container">$\sigma$</span> and <span class="math-container">$w$</span> must be 0. </p>
<p>Conversely, if <span class="math-container">$\sigma = 0$</span> and <span class="math-container">$w \neq 0$</span>, <span class="math-container">$X(s)$</span> exists as <span class="math-container">$\frac{-j}{\omega}$</span> with a magnitude of <span class="math-container">$\frac{1}{|\omega|}$</span>.</p>
<p>Unless my basics are messed up, why in the literature is <span class="math-container">$jw$</span> disregarded as influencing <span class="math-container">$X(s)$</span>? In other words, why is the ROC dependent only on <span class="math-container">$\sigma$</span>?</p>
|
<p>I think your misunderstanding is that you manipulate an expression, which is only valid for <span class="math-container">$\text{Re}\{s\}>0$</span>, in order to show for which values of <span class="math-container">$s$</span> it might be valid.</p>
<p>Note that the integral</p>
<p><span class="math-container">$$\mathcal{L}\{u(t)\}=\int_0^{\infty}e^{-st}dt$$</span></p>
<p>only converges for <span class="math-container">$\text{Re}\{s\}>0$</span>, so the result <span class="math-container">$1/s$</span> does not make any sense for other values of <span class="math-container">$s$</span>.</p>
<p>Only the real part of <span class="math-container">$s$</span> in the term <span class="math-container">$e^{-st}$</span> inside the integral can provide the necessary damping such that the integrand decays sufficiently fast for the integral to converge. That's why the region of convergence only depends on the real part of <span class="math-container">$s$</span>.</p>
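<p>This is easy to see numerically (a Python/NumPy sketch for <span class="math-container">$x(t)=u(t)$</span>): with <span class="math-container">$\sigma>0$</span> the partial integrals settle to <span class="math-container">$1/s$</span> whatever <span class="math-container">$\omega$</span> is, while for <span class="math-container">$\sigma=0,\ \omega\neq 0$</span> the integrand has magnitude <span class="math-container">$1$</span> and the partial integrals keep oscillating with the upper limit instead of converging:</p>

```python
import numpy as np

t = np.linspace(0.0, 200.0, 2000001)

def partial_integral(sigma, w, T):
    # Truncated Laplace integral of u(t): int_0^T e^{-(sigma + jw) t} dt
    tt = t[t <= T]
    y = np.exp(-(sigma + 1j * w) * tt)
    return np.sum((y[1:] + y[:-1]) * np.diff(tt)) / 2

# sigma > 0: the real damping makes the integral converge, for any w
print(partial_integral(0.5, 3.0, 100.0))
print(partial_integral(0.5, 3.0, 200.0))   # same value: it has converged
print(1 / (0.5 + 3j))                      # ... namely to 1/s

# sigma = 0, w != 0: |e^{-jwt}| = 1, no damping; the partial integrals
# depend on T forever, so the integral does not converge
print(partial_integral(0.0, 3.0, 100.0))
print(partial_integral(0.0, 3.0, 200.0))   # a different value
```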
| 153
|
Laplace transform
|
Impulse response of a causal LTI system without using Laplace transform
|
https://dsp.stackexchange.com/questions/93642/impulse-response-of-a-causal-lti-system-without-using-laplace-transform
|
<p>I have this differential equation that models a causal LTI system:
<span class="math-container">$$
\ddot{v}(t) - \dot{v}(t) - 2v(t) = \ddot{u}(t) + 2\dot{u}(t) + u(t)
$$</span></p>
<p>I was asked to find the impulse response both by using Laplace transform and by solving the ODE.</p>
<p>The first method is quite simple: set initial conditions to <span class="math-container">$0$</span>, then apply Laplace transform to the left and right side; this brings to:
<span class="math-container">$$
\begin{align}
s^2V(s) - sV(s) - 2V(s) & = s^2U(s) + 2sU(s) + U(s)\\
(s^2 - s - 2)V(s)& = (s^2 + 2s + 1)U(s)
\end{align}
$$</span>
Knowing that <span class="math-container">$\mathcal{L}[\delta_0](s) = 1 = U(s)$</span>, we get
<span class="math-container">$$
V(s) = \frac{s^2 + 2s + 1}{s^2 - s - 2}
$$</span>
Rewriting as partial fractions:
<span class="math-container">$$
V(s) = 1 + \frac{3}{s - 2}
$$</span>
Applying the inverse Laplace transform we get the result:
<span class="math-container">$$
v_{\delta_0}(t) = \delta_0(t) + 3e^{2t}\delta_{-1}(t) = h(t)
$$</span></p>
<p>Trying to solve it the other way leads to a crossroads; more details in a moment. First, we need the <em>characteristic polynomial</em>:
<span class="math-container">$$
P(s) = s^2 - s - 2
$$</span>
Solving for <span class="math-container">$P(s) = 0$</span> gives:
<span class="math-container">$$
s_1 = -1\quad s_2 = 2
$$</span>
Therefore, we know <span class="math-container">$h(t)$</span> is of the form:
<span class="math-container">$$
h(t) = d_0\delta_0(t) + d_1e^{-t}\delta_{-1}(t) + d_2e^{2t}\delta_{-1}(t)
$$</span>
since the ODE output's degree matches the input's. We now have to differentiate <span class="math-container">$h(t)$</span> twice and replace it into the ODE itself. The first derivative results:
<span class="math-container">$$
\dot{h}(t) = d_0\dot{\delta_0}(t) + d_1(-e^{-t}\delta_{-1}(t) + e^{-t}\delta_0(t)) + d_2(2e^{2t}\delta_{-1}(t) + e^{2t}\delta_0(t))
$$</span>
knowing that <span class="math-container">$\dot{\delta}_{-1}(t) = \delta_0(t)$</span>. Now I had two alternatives: the first, to replace <span class="math-container">$e^{-t}\delta_0(t)$</span> and similars with <span class="math-container">$\delta_0(t)$</span>, since <span class="math-container">$\delta_0(t)$</span> is non-zero only at <span class="math-container">$t=0$</span>, and <span class="math-container">$e^{0} = 1$</span>; the second, to leave it as it is and keep differentiating. I write the two paths:</p>
<ol>
<li><p>Not replacing:
<span class="math-container">$$
\begin{align}
\ddot{h}(t) & = d_0\ddot{\delta}_0(t) -d_1(-e^{-t}\delta_{-1}(t) + e^{-t}\delta_0(t)) + d_1(-e^{-t}\delta_0(t) + e^{-t}\dot{\delta}_0(t))\;+ \\
&\phantom{=}\;\; d_2(4e^{2t}\delta_{-1}(t) + 2e^{2t}\delta_0(t)) + d_2(2e^{2t}\delta_0(t) + e^{2t}\dot{\delta}_0(t))
\end{align}
$$</span>
Combining the <span class="math-container">$h(t)$</span>s into the ODE:
<span class="math-container">$$
\begin{align}
d_0\ddot{\delta}_0(t) + d_1e^{-t}\delta_{-1}(t) -d_1e^{-t}\delta_0(t) - d_1e^{-t}\delta_0(t) + d_1e^{-t}\dot{\delta}_0(t)\; & + \\
4d_2e^{2t}\delta_{-1}(t) + 2d_2e^{2t}\delta_0(t) + 2d_2e^{2t}\delta_0(t)+ d_2e^{2t}\dot{\delta}_0(t)\; & +\\
- d_0\dot{\delta_0}(t) + d_1e^{-t}\delta_{-1}(t) -d_1e^{-t}\delta_0(t) - 2d_2e^{2t}\delta_{-1}(t) -d_2e^{2t}\delta_0(t)\; & +\\
-2d_0\delta_0(t) -2d_1e^{-t}\delta_{-1}(t) -2d_2e^{2t}\delta_{-1}(t) & = \ddot{\delta}_0(t) + 2\dot{\delta}_0(t) + \delta_0(t)
\end{align}
$$</span>
Many terms cancel out; we only need to solve this system of linear equations (considering <span class="math-container">$t = 0\to e^0=1$</span> if necessary):
<span class="math-container">$$
\begin{cases}
d_0 = 1\\
d_1 + d_2 - d_0 = 2\\
3d_2 - 3d_1 - 2d_0 = 1
\end{cases}
$$</span>
which has <span class="math-container">$(d_0, d_1, d_2) = (1, 1, 2)$</span> as its solution. In conclusion:
<span class="math-container">$$
h(t) = 1\cdot\delta_0(t) + 2\cdot e^{2t}\delta_{-1}(t) + 1\cdot e^{-t}\delta_{-1}(t) = \underline{\delta_0(t) + 2e^{2t}\delta_{-1}(t) + e^{-t}\delta_{-1}(t)}
$$</span>
Hmm. Not quite like with the Laplace transform.</p>
</li>
<li><p>Replacing:
<span class="math-container">$$
\begin{align}
\dot{h}(t) & = d_0\dot{\delta_0}(t) + d_1(-e^{-t}\delta_{-1}(t) + \delta_0(t)) + d_2(2e^{2t}\delta_{-1}(t) + \delta_0(t))\\
& = d_0\dot{\delta_0}(t) - d_1e^{-t}\delta_{-1}(t) +d_1\delta_0(t) + 2d_2e^{2t}\delta_{-1}(t) + d_2\delta_0(t)
\end{align}
$$</span>
Now <span class="math-container">$\ddot{h}(t)$</span>:
<span class="math-container">$$
\begin{align}
\ddot{h}(t) & = d_0\ddot{\delta_0}(t) -d_1(-e^{-t}\delta_{-1}(t) + \delta_{0}(t)) + d_1\dot{\delta}_0(t) + 2d_2(2e^{2t}\delta_{-1}(t) + \delta_0(t)) + d_2\dot{\delta}_{0}(t)\\
&= d_0\ddot{\delta_0}(t) + d_1e^{-t}\delta_{-1}(t) - d_1\delta_{0}(t) + d_1\dot{\delta}_0(t) + 4d_2e^{2t}\delta_{-1}(t) + 2d_2\delta_0(t) + d_2\dot{\delta}_{0}(t)
\end{align}
$$</span>
applying the same substitutions as before. Putting all together:
<span class="math-container">$$
\begin{align}
d_0\ddot{\delta_0}(t) + d_1e^{-t}\delta_{-1}(t) - d_1\delta_{0}(t) + d_1\dot{\delta}_0(t) + 4d_2e^{2t}\delta_{-1}(t) + 2d_2\delta_0(t) + d_2\dot{\delta}_{0}(t)\;& + \\
-d_0\dot{\delta_0}(t) + d_1e^{-t}\delta_{-1}(t) - d_1\delta_0(t) - 2d_2e^{2t}\delta_{-1}(t) - d_2\delta_0(t)\;& + \\
-2d_0\delta_0(t) -2d_1e^{-t}\delta_{-1}(t) -2d_2e^{2t}\delta_{-1}(t) & = \ddot{\delta}_0(t) + 2\dot{\delta}_0(t) + \delta_0(t)
\end{align}
$$</span>
We now need to solve this, after canceling some terms:
<span class="math-container">$$
\begin{cases}
d_0 = 1\\
-2d_1 + d_2 - 2d_0 = 1\\
d_1 - d_0 + d_2 = 2
\end{cases}
$$</span>
which has this solution: <span class="math-container">$d_0 = 1, d_1 = 0, d_2 = 3$</span>. Placing the coefficients we found into the original formula:
<span class="math-container">$$
h(t) = 1\cdot\delta_0(t) + 0\cdot e^{-t}\delta_{-1}(t) + 3\cdot e^{2t}\delta_{-1}(t) = \underline{\delta_0(t) + 3e^{2t}\delta_{-1}(t)}
$$</span>
which is the exact solution we found with Laplace transform.</p>
</li>
</ol>
<p>After this <em>brief</em> preamble, the point is that every such exercise in my workbook has the solution obtained by the first method (without substituting terms), even though it is not the same as the one found with the Laplace transform. Sometimes both solutions are reported, even when they are clearly different. So, my questions are: can these solutions be considered equivalent? Isn't it wrong to perform the substitution by removing part of the signal derivatives? Why does the first method produce a different solution?</p>
<p>Thanks in advance to everyone who will help me out.</p>
|
<p>The problem with your first approach is that you assume</p>
<p><span class="math-container">$$f(t)\delta'(t)\stackrel{?}{=}f(0)\delta'(t)\tag{1}$$</span></p>
<p>which is wrong.</p>
<p>The correct equation is</p>
<p><span class="math-container">$$f(t)\delta'(t)=f(0)\delta'(t)-f'(0)\delta(t)\tag{2}$$</span></p>
<p>Equation <span class="math-container">$(2)$</span> is easily derived as follows:</p>
<p><span class="math-container">$$\big(f(t)\delta(t)\big)' = \big(f(0)\delta(t)\big)'= f(0)\delta'(t)\tag{3}$$</span></p>
<p>From the product rule we get</p>
<p><span class="math-container">$$\big(f(t)\delta(t)\big)' = f'(t)\delta(t)+f(t)\delta'(t)=f'(0)\delta(t)+f(t)\delta'(t)\tag{4}$$</span></p>
<p>Equating <span class="math-container">$(3)$</span> and <span class="math-container">$(4)$</span> results in <span class="math-container">$(2)$</span>.</p>
<p>Using <span class="math-container">$(2)$</span> instead of <span class="math-container">$(1)$</span> in your calculations will give you the same result as with the other method.</p>
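<p>Identity <span class="math-container">$(2)$</span> can also be sanity-checked symbolically by integrating both sides against a smooth test function (a sketch, assuming SymPy is available):</p>

```python
import sympy as sp

t = sp.symbols('t', real=True)
f = sp.exp(2*t)        # arbitrary smooth f:  f(0) = 1, f'(0) = 2
phi = sp.exp(-t**2)    # smooth, rapidly decaying test function

# Left side of (2): integral of f(t) * delta'(t) * phi(t)
lhs = sp.integrate(f*phi*sp.DiracDelta(t, 1), (t, -sp.oo, sp.oo))

# Right side of (2): f(0)*delta'(t) - f'(0)*delta(t), applied to phi
f0, df0 = f.subs(t, 0), sp.diff(f, t).subs(t, 0)
rhs = (f0*sp.integrate(phi*sp.DiracDelta(t, 1), (t, -sp.oo, sp.oo))
       - df0*sp.integrate(phi*sp.DiracDelta(t), (t, -sp.oo, sp.oo)))

assert sp.simplify(lhs - rhs) == 0
```

<p>Both sides reduce to <span class="math-container">$-\big(f'(0)\varphi(0)+f(0)\varphi'(0)\big)$</span>, as expected from the distributional definition of <span class="math-container">$\delta'$</span>.</p>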
| 154
|
Laplace transform
|
What is relationship between the Laplace transform of the ideally-sampled signal and that of the original continuous signal?
|
https://dsp.stackexchange.com/questions/95098/what-is-relationship-between-the-laplace-transform-of-the-ideally-sampled-signal
|
<p>Suppose a continuous signal <span class="math-container">$x(t)$</span>, whose Laplace transform is <span class="math-container">$X(s)$</span>. Suppose the ideally-sampled version of <span class="math-container">$x(t)$</span> is <span class="math-container">$\hat{x}(t)=\sum\limits_{n=-\infty}\limits^{\infty}x(nT)\delta(t-nT)$</span>, and the Laplace transform of <span class="math-container">$\hat{x}(t)$</span> is <span class="math-container">$\hat{X}(s)$</span>. Then what is the relationship between <span class="math-container">$X(s)$</span> and <span class="math-container">$\hat{X}(s)$</span>?</p>
<p>I know that the Fourier transform of <span class="math-container">$\hat{x}(t)$</span> is the periodic extension of that of <span class="math-container">${x}(t)$</span>. Does the Laplace transform also follow such conclusion?</p>
|
<p>Yes, the analogous result holds. The Laplace transform is just the Fourier transform of the exponentially weighted signal <span class="math-container">$x(t)e^{-\sigma t}$</span>, with <span class="math-container">$\sigma$</span> chosen inside the region of convergence, so the sampling result carries over. Directly from the definition, <span class="math-container">$\hat{X}(s)=\sum\limits_{n=-\infty}\limits^{\infty}x(nT)e^{-snT}$</span>, and wherever both transforms converge, <span class="math-container">$\hat{X}(s)=\frac{1}{T}\sum\limits_{k=-\infty}\limits^{\infty}X\!\left(s-jk\frac{2\pi}{T}\right)$</span>. That is, <span class="math-container">$\hat{X}(s)$</span> is the periodic extension of <span class="math-container">$X(s)$</span> in the direction of the imaginary axis, with period <span class="math-container">$j2\pi/T$</span>.</p>
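<p>The periodicity of <span class="math-container">$\hat{X}(s)$</span> along the imaginary axis is easy to verify numerically for a concrete causal signal. A sketch (assuming NumPy) with <span class="math-container">$x(t)=e^{-at}u(t)$</span>, whose sampled transform has the closed form <span class="math-container">$\hat{X}(s)=1/(1-e^{-(a+s)T})$</span>:</p>

```python
import numpy as np

# Ideal sampling of x(t) = exp(-a t) u(t):  xhat(t) = sum_n x(nT) delta(t - nT).
# Its Laplace transform Xhat(s) = sum_n x(nT) exp(-s n T) (truncated here)
# is periodic in s with period j*2*pi/T.
a, T = 1.0, 0.1
n = np.arange(5000)                      # truncation is fine: terms decay fast

def Xhat(s):
    return np.sum(np.exp(-a*n*T) * np.exp(-s*n*T))

s0 = 0.3 + 0.7j
period = 2j*np.pi/T
assert abs(Xhat(s0) - Xhat(s0 + period)) < 1e-9           # periodic along j-axis
assert abs(Xhat(s0) - 1/(1 - np.exp(-(a + s0)*T))) < 1e-9 # matches closed form
```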
| 155
|
Laplace transform
|
Can a Fourier Transform exist even if the j$\omega$ axis is not in the Region of Convergence in it's Laplace Transform
|
https://dsp.stackexchange.com/questions/53875/can-a-fourier-transform-exist-even-if-the-j-omega-axis-is-not-in-the-region-of
|
<p>A couple of confusions have occurred. The signal I'm considering is <strong>f(t) = sin(t)*u(t)</strong></p>
<ol>
<li><p>Fourier Transform of it can be derived.
<em><span class="math-container">$-i \pi (\delta (\omega -1)-\delta (\omega +1))$</span></em></p></li>
<li><p>According to my mathematica code, the ROC of LaplaceTransformation didn't have the j<span class="math-container">$\omega$</span> axis in it's region of convergence.( <strong>Re{s}>0</strong> )</p></li>
</ol>
<p>So it's not stable. (Makes sense: sin(t) is not absolutely integrable.)
<a href="https://www.wolframcloud.com/objects/ramithuh/Published/misc_sin_laplace.nb" rel="nofollow noreferrer">https://www.wolframcloud.com/objects/ramithuh/Published/misc_sin_laplace.nb</a></p>
<ol start="3">
<li>So, can a Fourier Transform exist even if the j<span class="math-container">$\omega$</span> axis is not in the Region of Convergence in it's Laplace Transform? </li>
</ol>
<p>Things got worse when I considered <strong>f(t) = sin(t)</strong>. Its Laplace transform integral doesn't converge. So considering <strong>sin(t)*u(t)</strong> and <strong>sin(t)*u(-t)</strong> separately, I got two different ROCs which don't overlap: <strong><em>Re{s} > 0</em></strong> and <strong><em>Re{s} < 0</em></strong>. So it means that the Laplace transform of sin(t) doesn't exist, right? Initially I thought the Laplace transform could cover all the signals which the Fourier transform covers.
Turns out that's not the case?</p>
<p>Please point out which step of my reasoning is wrong...</p>
<p>Thanks a bunch! :)</p>
<p><strong>Update: Thank you for pointing out my mistake. :D
<span class="math-container">$ \mathcal{F(sin(t)*u(t))} = -\frac{1}{2} i \pi \delta (\omega -1)+\frac{1}{2} i \pi \delta (\omega +1)-\frac{1}{2 (\omega -1)}+\frac{1}{2 (\omega +1)}$</span></strong></p>
|
<p>You're right that the Laplace transform is <em>not</em> more general than the Fourier transform. They are just different. There are several (theoretically) important functions for which the Laplace transform doesn't exist, but the Fourier transform does. A few examples are</p>
<ol>
<li><span class="math-container">$x(t)=e^{j\omega_0t}$</span></li>
<li><span class="math-container">$x(t)=\sin(\omega_0t+\phi)$</span></li>
<li><span class="math-container">$x(t)= \textrm{sinc}(\omega_0t)$</span></li>
<li><span class="math-container">$x(t)=\frac{1}{\pi t}$</span></li>
<li><span class="math-container">$x(t)=\textrm{sign}(t)$</span></li>
</ol>
<p>The first two involve Dirac impulses in their Fourier transform, whereas the third and the fourth have discontinuous Fourier transforms. I hope it is obvious why the complex exponential and the sine are important. The <span class="math-container">$\textrm{sinc}$</span> function is needed to represent the impulse response of ideal frequency-selective filters, such as low pass and high pass filters, and <span class="math-container">$1/(\pi t)$</span> is the impulse response of an ideal Hilbert transformer. Note that those functions are non-causal (i.e., they are not zero for <span class="math-container">$t<0$</span>). The class of non-causal functions for which the (two-sided) Laplace transform exists is quite restricted because damping through multiplication with <span class="math-container">$e^{st}$</span> only works either for <span class="math-container">$t>0$</span> (if <span class="math-container">$\textrm{Re}\{s\}<0$</span>) or for <span class="math-container">$t<0$</span> (if <span class="math-container">$\textrm{Re}\{s\}>0$</span>), but not for both.</p>
<p>In your first example (<span class="math-container">$\sin(t)u(t)$</span>) you got the Fourier transform wrong, even though it does contain Dirac impulses. The Fourier transform has Dirac impulses whenever the Laplace transform (if it exists) has poles on the imaginary axis.</p>
| 156
|
Laplace transform
|
Determining Stability of a continuous time system using Laplace Transform
|
https://dsp.stackexchange.com/questions/60662/determining-stability-of-a-continuous-time-system-using-laplace-transform
|
<p>I'm following Oppenheim's book. In examples, the Laplace transforms of the following signals </p>
<p><span class="math-container">$e^{-t}u(t)$</span> and <span class="math-container">$e^{-t-1}u(t+1)$</span> </p>
<p>are given as <span class="math-container">$\frac{1}{s+1}$</span> and <span class="math-container">$\frac{e^{s}}{s+1}$</span>, both having the same ROC <span class="math-container">$Re(s)>-1$</span>.</p>
<p>however, the first signal is causal and the second one is non-causal. The system function is not rational for the second one. But why is that a problem when the ROC is essentially the same?</p>
|
<p>ROC is a strip of plane which does not contain any poles. The region is determined by the location of poles. </p>
<p>The first signal, <span class="math-container">$x(t) = e^{-t} u(t)$</span> , has the Laplace transform <span class="math-container">$X(s) = \frac{1}{s+1}$</span> and its ROC is <span class="math-container">$ \mathcal{Re}\{s\} > -1 $</span> , since <span class="math-container">$x(t)$</span> was defined to be causal.</p>
<p>The second signal, <span class="math-container">$y(t)$</span>, is obtained by advancing <span class="math-container">$x(t)$</span> by <span class="math-container">$1$</span> unit in time. Hence <span class="math-container">$y(t) = x(t+1)$</span>. By Laplace transform property it's seen that <span class="math-container">$Y(s) = \frac{e^{s}}{s+1}$</span> , and it has the same ROC with <span class="math-container">$X(s)$</span>, since they are both <strong>right sided</strong> sequences. However, <span class="math-container">$y(t)$</span> is not causal despite being right sided, due to the advance of <span class="math-container">$1$</span> unit to the left. </p>
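<p>A quick numerical check of <span class="math-container">$Y(s)=e^{s}/(s+1)$</span> is possible by evaluating the bilateral Laplace integral directly. A sketch using SciPy, at a real <span class="math-container">$s$</span> inside the ROC:</p>

```python
import numpy as np
from scipy.integrate import quad

# Bilateral Laplace transform of y(t) = exp(-(t+1)) u(t+1), evaluated
# numerically at a real s inside the ROC Re(s) > -1.
s = 0.5
val, _ = quad(lambda t: np.exp(-(t + 1)) * np.exp(-s*t), -1, np.inf)
assert abs(val - np.exp(s)/(s + 1)) < 1e-8   # matches Y(s) = e^s / (s+1)
```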
| 157
|
Laplace transform
|
LTI system with Laplace transform
|
https://dsp.stackexchange.com/questions/42004/lti-system-with-laplace-transform
|
<p>Given the input $$x(t)=u(t)$$ and the corresponding output signal measured as $$y(t)= 2 e^{-3t} u(t)$$ determine the impulse response $h(t)$.</p>
<p>This is what I have done so far:
$$ h(t)= \mathscr{L}^{-1} \left\{ \frac{Y(s)}{X(s)} \right\}, \qquad \frac{Y(s)}{X(s)} = \frac{2/(s+3)}{1/s} = \frac{2s}{s+3}. $$ </p>
<p>I need to find the Laplace inverse of this, I can't figure out the approach.</p>
|
<p>Your approach is correct. Rewrite $H(s)$ as</p>
<p>$$H(s)=\frac{2s}{s+3}=2\frac{s+3-3}{s+3}=2-\frac{6}{s+3}\tag{1}$$</p>
<p>and use basic Laplace transform identities to obtain $h(t)$ from $(1)$.</p>
<p>Note that you don't need to use the Laplace transform. A time domain approach as suggested in <a href="https://dsp.stackexchange.com/a/42009/4298">oxuf's answer</a> is even more straightforward.</p>
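<p>The partial-fraction step and the final answer can be cross-checked with SymPy (a sketch):</p>

```python
import sympy as sp

t, s = sp.symbols('t s')
H = 2*s/(s + 3)

# Long division / partial fractions reproduce (1): H(s) = 2 - 6/(s + 3)
assert sp.simplify(sp.apart(H, s) - (2 - 6/(s + 3))) == 0

# Cross-check: Y(s) = H(s) X(s) with X(s) = 1/s recovers the measured output
y = sp.inverse_laplace_transform(H/s, s, t)
assert sp.simplify(y - 2*sp.exp(-3*t)*sp.Heaviside(t)) == 0

# And the inverse of (1) itself: h(t) = 2*delta(t) - 6*exp(-3t)*u(t)
h = sp.inverse_laplace_transform(2 - 6/(s + 3), s, t)
print(h)
```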
| 158
|
Laplace transform
|
Can use of Fourier transform be minimized completely with the help of Laplace and Z transform?
|
https://dsp.stackexchange.com/questions/31415/can-use-of-fourier-transform-be-minimized-completely-with-the-help-of-laplace-an
|
<p>Fourier transform has different types like continuous Fourier transform (CFT), Discrete time Fourier transform (DTFT) and Discrete Fourier transform ( DFT).</p>
<p>CFT can be used in case of continuous aperiodic signals while DFT for discrete aperiodic signals . </p>
<p>On the other hand, Laplace transform can be used in case of continuous signals and Z transform for discrete signals.</p>
<p>So I want to ask: can the use of the Fourier transform be eliminated completely with the help of the Laplace and Z transforms?</p>
|
<p>The answer to your last question is definitely 'no'. The point hotpaw2 makes in <a href="https://dsp.stackexchange.com/a/31416/4298">his answer</a> is very relevant: the FFT is an efficient implementation of the DFT, and there are no equivalently efficient implementations for the numerical computation of the $\mathcal{Z}$-transform or the Laplace transform.</p>
<p>But that's not the only reason. There are important functions (or sequences) for which the Laplace transform or the $\mathcal{Z}$-transform don't even exist, whereas the Fourier transform does. E.g., take a sinusoid or a complex exponential extending from $-\infty$ to $\infty$. These functions (or sequences) don't have a Laplace transform or a $\mathcal{Z}$-transform. Other important examples are impulse responses of ideal frequency selective filters such as low pass or high pass. They can be represented in terms of the sinc function, which can only be transformed using the Fourier transform, but not using the (bilateral) Laplace transform or - in discrete time - the (bilateral) $\mathcal{Z}$-transform.</p>
<p>So even if formally it looks like the Fourier transform is a special case of the Laplace transform or the $\mathcal{Z}$-transform, that's generally not the case. One reason for that is the incorporation of the theory of distributions in the theory of the Fourier transform (i.e., the use of the Dirac delta impulse), which makes it possible to compute the transform of functions like $\sin(\omega_0t)$ or $e^{j\omega t}$. The latter is not possible using the (bilateral) Laplace transform (or, in discrete time, the bilateral $\mathcal{Z}$-transform).</p>
<p>When people see the definitions of the (bilateral) Laplace transform and of the Fourier transform</p>
<p>$$X_L(s)=\int_{-\infty}^{\infty}x(t)e^{-st}dt\\
X_F(j\omega)=\int_{-\infty}^{\infty}x(t)e^{-j\omega t}dt\tag{1}$$</p>
<p>it may seem obvious to them that both transforms become identical by substituting $s=j\omega$. This, however, is generally not true. The pitfall here is the fact that the substitution does not take into account the convergence of the improper integrals. Depending on $x(t)$, the Laplace integral might not converge for $s=j\omega$, so $X_F(j\omega)$ might not even exist. The substitution is only valid if the region of convergence (ROC) of the Laplace integral includes the $j\omega$-axis. A completely analogous argument is true for the $\mathcal{Z}$-transform and the DTFT. In that case the substitution $z=e^{j\omega}$ is only valid if the ROC includes the unit circle.</p>
<p>The last paragraph may seem to imply that the Laplace transform and the $\mathcal{Z}$-transform are simply more general than the respective versions of the Fourier transform. However, this is also not true, as already mentioned above, because there are functions (sequences) that can only be treated by the Fourier transform, but not by the Laplace transform ($\mathcal{Z}$-transform).</p>
<p>Also take a look at the following answers to related questions: <a href="https://dsp.stackexchange.com/a/26178/4298">answer 1</a>, <a href="https://dsp.stackexchange.com/a/15356/4298">answer 2</a>, <a href="https://dsp.stackexchange.com/a/26776/4298">answer 3</a>.</p>
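<p>The sinc example can be made concrete numerically: for any <span class="math-container">$\sigma>0$</span> the factor <span class="math-container">$e^{-\sigma t}$</span> amplifies the <span class="math-container">$t<0$</span> tail, so the Laplace integral cannot converge absolutely. A sketch (assuming NumPy/SciPy):</p>

```python
import numpy as np
from scipy.integrate import quad

# sinc has a Fourier transform (a rect), but its bilateral Laplace integral
# cannot converge: exp(-sigma*t) with sigma > 0 amplifies the t < 0 tail
# (and sigma < 0 amplifies the t > 0 tail). Watch the t < 0 part grow:
sigma = 0.1
vals = []
for T in [10, 20, 40, 80]:
    v, _ = quad(lambda u: abs(np.sinc(u/np.pi))*np.exp(-sigma*u), -T, 0,
                limit=1000)   # np.sinc(u/pi) = sin(u)/u
    vals.append(v)
    print(T, v)               # grows without bound as T increases
```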
| 159
|
Laplace transform
|
Inverse Laplace Transform
|
https://dsp.stackexchange.com/questions/60664/inverse-laplace-transform
|
<p>A system given by <span class="math-container">$\frac{s-1}{(s+1)(s-2)}$</span> has to be inverse transformed so that it is anticausal and nonstable. The answer given is <span class="math-container">$h(t)=-\frac{1}{3}(2e^{-t}+e^{2t})u(-t)$</span></p>
<p>Why the minus sign at the beginning?</p>
|
<p>First you have to remember a Laplace transform property:</p>
<p><span class="math-container">$$ e^{a t} u(t) \longleftrightarrow \frac{1}{s-a} ~~~,~~~ \mathcal{Re}\{s\} > \mathcal{Re}\{a\} \tag{1} $$</span></p>
<p><span class="math-container">$$ -e^{a t} u(-t) \longleftrightarrow \frac{1}{s-a} ~~~,~~~ \mathcal{Re}\{s\} < \mathcal{Re}\{a\} \tag{2} $$</span></p>
<p>This property states that a given Laplace transform <span class="math-container">$H(s)$</span> can correspond to multiple inverse transforms, depending on the region of convergence ROC. </p>
<p>Therefore given <span class="math-container">$H(s) = \frac{1}{s-a}$</span> , you can find two possible inverses as <span class="math-container">$x(t) = e^{at} u(t)$</span> or <span class="math-container">$x(t) = -e^{at} u(-t)$</span> depending on whether the ROC is to the left or right of the pole location. Note that one of them is causal and the complementary is anti-causal.</p>
<p>Now, given your transfer function </p>
<p><span class="math-container">$$ H(s) = \frac{s-1}{(s+1)(s-2)} = \frac{2/3}{s+1} + \frac{1/3}{s-2} , \tag{3} $$</span> </p>
<p>it has two poles at <span class="math-container">$s = -1$</span> and <span class="math-container">$s=2$</span>. There will be three possible ROC's with three different inverses :</p>
<p><span class="math-container">$$
\begin{align}
\mathcal{Re}\{s\} < \mathcal{Re}\{-1\} & \implies h(t) = -\frac{2}{3} e^{-t}u(-t) - \frac{1}{3} e^{2t} u(-t) \tag{4} \\
\mathcal{Re}\{-1\} < \mathcal{Re}\{s\} < \mathcal{Re}\{2\} &\implies h(t) = \frac{2}{3} e^{-t}u(t) - \frac{1}{3} e^{2t} u(-t) \tag{5} \\
\mathcal{Re}\{s\} > \mathcal{Re}\{2\} & \implies h(t) = \frac{2}{3} e^{-t}u(t) + \frac{1}{3} e^{2t} u(t) \tag{6}
\end{align}
$$</span></p>
<p>The impulse response in (4) is anti-causal and unstable.</p>
<p>The impulse response in (5) is non-causal and stable.</p>
<p>The impulse response in (6) is causal and unstable.</p>
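<p>The expansion <span class="math-container">$(3)$</span> and the causal inverse <span class="math-container">$(6)$</span> can be cross-checked with SymPy (a sketch; SymPy's <code>laplace_transform</code> is unilateral, which is exactly the causal case):</p>

```python
import sympy as sp

s, t = sp.symbols('s t', positive=True)
H = (s - 1)/((s + 1)*(s - 2))

# Partial fractions reproduce (3)
expansion = sp.Rational(2, 3)/(s + 1) + sp.Rational(1, 3)/(s - 2)
assert sp.simplify(sp.apart(H, s) - expansion) == 0

# Forward-transforming the causal inverse (6) recovers H(s), ROC Re(s) > 2
h_causal = sp.Rational(2, 3)*sp.exp(-t) + sp.Rational(1, 3)*sp.exp(2*t)
Hc = sp.laplace_transform(h_causal, t, s)[0]
assert sp.simplify(Hc - H) == 0
```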
| 160
|
Laplace transform
|
Meaning and unit of frequency in Laplace (Fourier) transform
|
https://dsp.stackexchange.com/questions/43733/meaning-and-unit-of-frequency-in-laplace-fourier-transform
|
<p>Imagine transfer function obtained by Laplace transform, for example:</p>
<p>$G(s) = \dfrac{1}{s+1}$</p>
<p>Now, I would like to do some frequency analysis, so I replace the $s$ with $\omega i$ (let's consider this operation valid for this example).</p>
<p>What is the unit of the $\omega$? So far what I have seen, the $\omega$ is noted as frequency or angular velocity. I asked my colleagues and I got various answers:</p>
<ul>
<li>rad/sec</li>
<li>Hz</li>
<li>no unit</li>
</ul>
<p>What is correct and why? Does it depend on real variable passed to transform (if somebody uses different variable than time)?</p>
|
<p>If you are dealing with the Laplace transform $G(s)$ of a <strong>time</strong> domain signal $g(t)$ and its evaluation on the imaginary axis to get the Fourier transform $G(j\omega)$ (assuming it exists) then the unit of your frequency $\omega$ is <strong>radians per second</strong> assuming the unit of the time was seconds. </p>
<p>Its relation to cyclic frequency is: $$\omega = 2 \pi f$$ where $f$ is the frequency in Hz (<strong>cycles per second</strong>).</p>
<p>On the other hand, if the initial function was like $g(x)$ where $x$ was a spatial variable with unit of meters, then the transform domain frequency unit will be in <strong>radians per meter</strong>, where its relation to spatial frequency is still $\omega = 2 \pi f$, with $f$ now having the unit of <strong>cycles per meter</strong>.</p>
| 161
|
Laplace transform
|
Why can you use the one-sided laplace transform to solve differential equation describing a causal LTI-system?
|
https://dsp.stackexchange.com/questions/71810/why-can-you-use-the-one-sided-laplace-transform-to-solve-differential-equation-d
|
<p>In an example, an equation describing a causal LTI-system is</p>
<p><span class="math-container">$$
(D^2 + 5D + 6) y(t) = (D+1) x(t)
$$</span></p>
<p>where <span class="math-container">$y(t) = y_{zs}(t) + y_{zi}(t)$</span> and the initial conditions are <span class="math-container">$y(0^-) = 2, \dot{y}(0^-) = 1$</span>.<br />
<span class="math-container">$x(t) = e^{-4t}u(t)$</span> and we want to calculate <span class="math-container">$y(t)$</span>.</p>
<p>My teacher said in an example of a solution that because <span class="math-container">$x(t) = 0, \quad t < 0$</span> we can take the one-sided/unilateral laplace transform of the RHS (since then the unilateral and bilateral laplace transform are the same, I guess ). Further, the explanation continued with that because the system is causal, the impulse response <span class="math-container">$h(t) = 0$</span> for <span class="math-container">$t < 0$</span> and therefore <span class="math-container">$y_{zs}(t) = (x*h)(t) = 0$</span> for <span class="math-container">$t < 0$</span>. Therefore, <strong>supposedly, <span class="math-container">$y(t<0)=0$</span> and we can take the one-sided laplace transform of the LHS too</strong>.</p>
<p>I am questioning the part in boldface, because what about the zero-input response <span class="math-container">$y_{zi}(t)$</span>, how do we know its value for <span class="math-container">$t<0$</span>? I would like to belive it is not equal to <span class="math-container">$0$</span> for <span class="math-container">$t<0$</span> due to the initial conditions given at <span class="math-container">$0^-$</span> not being zero and <span class="math-container">$0^- < 0$</span>. If it is not equal to <span class="math-container">$0$</span> for <span class="math-container">$t<0$</span> how can we know we can take the one-sided laplace transform of the LHS?</p>
<p>Note: I also read <a href="https://dsp.stackexchange.com/questions/37731/differential-equations-and-lti-systems">this question</a> in which it is stated that non-zero initial conditions make the system non-linear and time-varying, which also makes me think the equation at hand is inconsistent with the fact that it describes an LTI-system?</p>
<p><strong>Edit 1</strong> in response to the comment about Lathi's linear systems and signals:<br />
I did not remember that this example was indeed from the book, but I have read the example and the section "Comments on initial conditions at <span class="math-container">$0^-$</span> and at <span class="math-container">$0^+$</span> " before. The section explains that we cannot expect the total response <span class="math-container">$y(t)$</span> to satisfy the initial conditions given at <span class="math-container">$0^-$</span> at <span class="math-container">$0$</span>, which makes perfect sense I think since <span class="math-container">$0^- \neq 0$</span>. It goes on to say that there is another version of the laplace transform, <span class="math-container">$L_+$</span> that is not as convenient to work with. The author's discussion might have gone a bit over my head, because unfortunately I can't understand how it answers my questions, that is how we can assume <span class="math-container">$y(t)$</span> to be causal (in order to use the unilateral laplace transform on both sides of the equation) and the note about the non-zero initial conditions implying the system to not be LTI.</p>
<p>The reason I have not accepted the answer given so far is that it is a bit to advanced for me to judge its correctness and therefore I wanted to wait a bit in case there would be more input for my question. But eventually I will just assume it is correct and accept it.</p>
|
<blockquote>
<p>If it is not equal to 0 for t<0 how can we know we can take the one-sided laplace transform of the LHS?</p>
</blockquote>
<p>Unfortunately, the way the problem is framed it is very difficult to do in a rigorous manner. You <em>can</em>, by remembering that <span class="math-container">$0^-$</span> is shorthand for <span class="math-container">$-\epsilon$</span> as <span class="math-container">$\epsilon \to 0$</span>, and taking limits everywhere, <strong>and</strong> having expressions where you're looking at intervals of <span class="math-container">$t$</span> from <span class="math-container">$-\infty$</span> to <span class="math-container">$-2\epsilon$</span> and other maddening things.</p>
<p>The easy way to do it is to separate the problem: solve the problem for the <span class="math-container">$x(t)$</span> that's given. Then <em>augment</em> the problem by finding an <span class="math-container">$x(t)$</span> that is zero everywhere except in <span class="math-container">$0^- < t < 0^+$</span> that will result in <span class="math-container">$y(0^+) = 2$</span> (note the <span class="math-container">$0^+$</span> instead of <span class="math-container">$0^-$</span>) and <span class="math-container">$\dot y(0^+) = 1$</span>. This involves setting <span class="math-container">$x(t)$</span> to a linear combination of <span class="math-container">$\delta(t)$</span> and <span class="math-container">$\delta^2(t)$</span> (as if <span class="math-container">$\delta(t)$</span> weren't wacky enough, <span class="math-container">$\delta^2(t) = d/dt\ \delta(t)$</span>).</p>
<p>Then add the two solutions together. It may not be what your prof had in mind, but it takes the least amount of hand-waving, and if you really want rigor you can break <span class="math-container">$\epsilon$</span> and put it to work.</p>
<p>By doing the above trick of expressing <span class="math-container">$x(t)$</span> as a linear combination of the specified <span class="math-container">$x(t)$</span> plus whatever sum of the Dirac impulse and its derivatives, you cast the problem into one where the system <em>is</em> linear -- you're just using physically impossible values for <span class="math-container">$x(t)$</span>.</p>
| 162
|
Laplace transform
|
How can I plot a 3D graph of a given Laplace Transform of a function?
|
https://dsp.stackexchange.com/questions/40628/how-can-i-plot-a-3d-graph-of-a-given-laplace-transform-of-a-function
|
<p>Let's say I have a function called $f(t)$ in time domain as: </p>
<p>$$f(t) = \exp(-3t)\cos(5t)$$</p>
<p>And the Laplace transform of this function call it $F(s)$ becomes:</p>
<p>$$F(s)=\frac{(s + 3)}{(s + 3)^2 + 25}$$</p>
<p>I want to plot the 3D plot of $\lvert F(s)\rvert$ as a surface above the $s$-plane.</p>
<p>I couldn't find any script. I have the version R2014a of MATLAB.
How is this done in MATLAB? Something similar to this plot:</p>
<p><a href="https://i.sstatic.net/1peFF.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/1peFF.png" alt="enter image description here"></a></p>
|
<pre><code>% Evaluate |F(s)| on a grid of s = X + jY covering part of the s-plane
[X,Y] = meshgrid(-10:.1:10);    % X = Re{s}, Y = Im{s}
s = X + j*Y;
Z = abs((s+3)./((s+3).^2+25));  % |F(s)| for F(s) = (s+3)/((s+3)^2 + 25)
mesh(X,Y,Z)                     % surface above the s-plane
</code></pre>
<p><a href="https://i.sstatic.net/q999y.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/q999y.png" alt="3d plot"></a></p>
| 163
|
Laplace transform
|
Laplace transform of $f\left(\frac{t - b}{a}\right)$
|
https://dsp.stackexchange.com/questions/38991/laplace-transform-of-f-left-fract-ba-right
|
<p>Consider the function $f\left(\frac{t - b}{a}\right)$. We want want to calculate its Laplace transform. There are two approaches:</p>
<ul>
<li><p>Firstly, </p>
<ol>
<li>let $g(t) = f\left(\frac ta\right)$. </li>
<li>Then $\mathcal{L}\left\{f\left(\frac{t-b}{a}\right)\right\} = \mathcal{L}\left\{g(t - b)\right\} = e^{-bs}G(s)$ and $G(s) = \mathcal{L}\{g(t)\} = \mathcal{L}\left\{f\left(\frac ta\right)\right\} = |a|F(as)$. </li>
<li>Therefore $\mathcal{L}\left\{f\left(\frac{t - b}{a}\right)\right\} = |a|e^{-bs}F(as).$</li>
</ol></li>
<li><p>Secondly, </p>
<ol>
<li>let $h(t) = f\left(\frac{t - b}{a}\right)$. </li>
<li>Then $\mathcal{L}\left\{f\left(\frac{t-b}{a}\right)\right\} = \mathcal{L}\left\{h\left(\frac ta\right)\right\} = |a|H(as)$ and $H(s) = \mathcal{L}\{h(t)\} = \mathcal{L}\left\{f\left(\frac{t - b}{a}\right)\right\} = e^{-\left(\frac ba\right) s}F(s)$. </li>
<li>Therefore $\mathcal{L}\left\{f\left(\frac{t - b}{a}\right)\right\} = |a|e^{-bs}F(as).$</li>
</ol></li>
</ul>
<p>So far so good. However, let $f(t) = e^t$. Then
$$F(s) = \frac{1}{s - 1}$$
and
$$\mathcal{L}\left\{f\left(\frac{t - b}{a}\right)\right\} = \frac{ae^{-\left(\frac ba\right)s}}{as - 1}$$
from <a href="https://www.wolframalpha.com/input/?i=Integrate%5BExp%5Bt%2Fa%20-%20b%2Fa%5D%20*%20Exp%5B-s*t%5D,%20%7Bt,%200,%20infty%7D%5D" rel="nofollow noreferrer">Wolfram Alpha</a>.</p>
<ul>
<li><p>The problem comes from the fact that we introduce new data by shifting, i.e., the function is not zero for $t < 0$. What can we do then? Can we not use the time delay property?</p></li>
<li><p>What about the Fourier transform? The arguments above are valid for the Fourier transform for $s = j\omega$ and you do not have the requirement of $f(t < 0) = 0$ when applying the shift property. But when you set $f(t) = \cos(t)$ you get:
$$\mathcal{F}\left\{f\left(\frac{t - b}{a}\right)\right\}\bigg\vert_{\omega \ge 0,\ a > 0} = \frac{1}{2}e^{\left(\frac ba\right)j\omega}\delta(\omega - 1/a)$$
So it looks like it should be $e^{-\left(\frac ba\right)s}$ instead of $e^{-bs}$. But where have I made a mistake?</p></li>
</ul>
|
<p>Your two alternate derivations of the Laplace transform (specifically the bilateral one) of the signal $f((t - b)/a)$ seem right, both resulting in the same expression, which serves as a check:</p>
<p>$$
\mathcal{L}\{f((t - b)/a)\} = |a|e^{-bs}F(as)
$$</p>
<p>However, when you assume the signal $f(t) = e^t u(t)$ whose Laplace transform is
$$
F(s) = \frac{1}{s - 1}
$$</p>
<p>your conclusion is <strong>wrong</strong>
$$
\mathcal{L}\{f((t - b)/a)\} = |a|e^{-bs}F(as) \neq \frac{ae^{-(b/a)s}}{as - 1}
$$</p>
<p>which should be corrected as (according to your own derivation)
$$
\mathcal{L}\{f((t - b)/a)\} = |a|e^{-bs}\frac{1}{as - 1}
$$</p>
<p>So is this the <strong>problem</strong> you are referring to?</p>
| 164
|
Laplace transform
|
When to use Fourier, Laplace and Z transforms?
|
https://dsp.stackexchange.com/questions/64539/when-to-use-fourier-laplace-and-z-transforms
|
<p>If we have an LTI system, with an input signal <span class="math-container">$x(t)$</span>, impulse response <span class="math-container">$h(t)$</span> and output <span class="math-container">$y(t)$</span>, I was under the assumption that if the input and impulse response were continuous in time, then you would use the FT on <span class="math-container">$x(t)$</span> and the Laplace transform on <span class="math-container">$h(t)$</span>, and multiply them together to find the output <span class="math-container">$Y(f)$</span> in the frequency domain.</p>
<p>Likewise, if <span class="math-container">$x[n]$</span> was a discrete input signal, <span class="math-container">$h[n]$</span> was a discrete impulse response, and <span class="math-container">$y[n]$</span> a discrete output, then I thought we would use the DFT on <span class="math-container">$x[n]$</span>, the Z transform on <span class="math-container">$h[n]$</span>, then multiply them together to get <span class="math-container">$Y(k)$</span>, which is a discrete output in the frequency domain. </p>
<p>However I've seen people saying we would use the Fourier transform on the impulse response to get the transfer functions <a href="http://cas.ee.ic.ac.uk/people/dario/files/E22/A1-Bode%20plots.pdf" rel="nofollow noreferrer">here</a> so it's kinda thrown me off a little bit. Could anyone clarify this?</p>
|
<p>It's a natural consequence of applying a transform to a convolution relation. The output <span class="math-container">$y(t)$</span> of a (continuous-time) LTI system is described by a convolution integral :</p>
<p><span class="math-container">$$y(t) = h(t)\ast x(t) = \int_{-\infty}^{\infty} x(\tau) h(t-\tau) d\tau $$</span></p>
<p>And when you apply a Fourier transform on this relation, it turns out to be a multiplication in the transform domain such as:</p>
<p><span class="math-container">$$ Y(j\omega) = \mathscr{F}\{ y(t) \} = \mathscr{F}\{ h(t) \ast x(t) \} $$</span>
<span class="math-container">$$ Y(j\omega) = \mathscr{F}\{ h(t) \} \cdot \mathscr{F}\{ x(t) \} = H(j\omega)\cdot X(j\omega) $$</span></p>
<p>Similarly, you can also apply a Laplace transform on it:
<span class="math-container">$$ Y(s) = \mathscr{L}\{ y(t) \} = \mathscr{L}\{ h(t) \ast x(t) \} $$</span>
<span class="math-container">$$ Y(s) = \mathscr{L}\{ h(t) \} \cdot \mathscr{L}\{ x(t) \} = H(s)\cdot X(s) $$</span></p>
<p>And exactly the same happens for discrete-time LTI systems.</p>
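<p>The discrete-time counterpart of this convolution property is easy to demonstrate with the DFT. A sketch, assuming NumPy; zero-padding makes circular convolution equal linear convolution:</p>

```python
import numpy as np

# Convolution in time is multiplication in the transform domain.
rng = np.random.default_rng(0)
x = rng.standard_normal(64)
h = rng.standard_normal(16)

y_time = np.convolve(x, h)                   # direct (linear) convolution
N = len(x) + len(h) - 1                      # pad so circular == linear
y_freq = np.fft.ifft(np.fft.fft(x, N) * np.fft.fft(h, N)).real

assert np.allclose(y_time, y_freq)
```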
| 165
|
Laplace transform
|
Why is a negative exponent present in Fourier and Laplace transform?
|
https://dsp.stackexchange.com/questions/19004/why-is-a-negative-exponent-present-in-fourier-and-laplace-transform
|
<p>Could anyone explain why there is a need for a negative exponent in the Fourier and Laplace transforms? I looked through the web but I couldn't find anything. Does anything happen if a positive exponent is placed in these transforms?</p>
<p>While looking through <a href="http://1drv.ms/1tbV45S" rel="noreferrer">http://1drv.ms/1tbV45S</a> it says that if $s>0$ it becomes a rapidly decreasing function, while if $s<0$ it becomes a rapidly increasing function of $t$. I couldn't understand that. Can anyone illustrate this?</p>
|
<p>Matt is correct that the sign is convention. I think that there is a reason for it beyond that though.</p>
<p>If we look at complex frequencies in the complex plane, they look like a constant vectors that rotate in one direction or another. Positive frequencies rotate counter-clockwise, negative frequencies rotate clockwise, and "0 Hz" frequencies don't rotate at all.</p>
<p><img src="https://i.sstatic.net/8RMih.png" alt="Positive frequency"></p>
<p>The Fourier transform has a negative sign to intentionally rotate in the opposite direction as the frequencies that they are "looking" for.</p>
<p><img src="https://i.sstatic.net/W9k2C.png" alt="Negative frequency"></p>
<p>The reason for the opposite rotation is that when the two frequency vectors are multiplied, their phases will repeatedly cancel out, so when the results are summed together there will be a massive vector due to all of the individual vectors lining up.</p>
<p>$$
X(f) = \sum\limits_{n=0}^{N-1}x(n)e^{-j2\pi kn/N}
$$</p>
<p><img src="https://i.sstatic.net/rtCv3.png" alt="Fourier frequency vectors"></p>
<p>This is how the Fourier transform "looks" for frequencies. If the two frequencies are the same or "close" (how close they need to be depends on the length of the DFT) they will line up well and cause a massive response in the summation. I have showed how this works for the discrete Fourier transform (DFT), but the exact same reasoning applies to the continuous transform.</p>
<p>Hopefully this explains why the Fourier transform would want the vectors rotating in the opposite direction. To be perfectly honest I don't know the Laplace transform well enough to give solid reasoning for its negative sign. Since the two transforms are very closely related though (the Laplace transform being a generalization of the Fourier transform), I assume that it is for similar reasons.</p>
| 166
|
Laplace transform
|
Why does subbing $s = j\omega$ into the Laplace transform of a cosine wave yield a purely imaginary result?
|
https://dsp.stackexchange.com/questions/58364/why-does-subbing-s-j-omega-into-the-laplace-transform-of-a-cosine-wave-yield
|
<p>The Laplace transform of a cosine starting at <span class="math-container">$t=0$</span> is given by</p>
<p><span class="math-container">$$F(s) = \frac{s}{s^2 + \omega_0^2}$$</span></p>
<p>If I sub in <span class="math-container">$s = j\omega$</span>, I get the Fourier transform of a cosine starting at <span class="math-container">$t=0$</span>:</p>
<p><span class="math-container">$$F(j\omega) = \frac{j\omega}{\omega_0^2 - \omega^2}$$</span></p>
<p>As this is a purely imaginary result it shows that the input function is made up of only sine waves and must therefore be odd. </p>
<p>This doesn't make sense as a cosine wave starting at <span class="math-container">$t=0$</span> is neither odd nor even, so <span class="math-container">$F(j\omega)$</span> should have both real and imaginary parts.</p>
<p>I find a similar issue when using a sine wave starting at <span class="math-container">$t=0$</span> which returns a purely real function of frequency:</p>
<p><span class="math-container">$$F_2(j\omega) = \frac{\omega_0}{\omega_0^2 - \omega^2}$$</span></p>
<p>What is it that I'm misunderstanding?</p>
|
<blockquote>
<p>If I sub in <span class="math-container">$s=j\omega$</span>, I get the Fourier transform of a cosine wave starting at <span class="math-container">$t=0$</span>.</p>
</blockquote>
<p>No, you don't. You can't just set <span class="math-container">$s=j\omega$</span> in an expression for the Laplace transform and expect the result to be the Fourier transform of the function. This only works if the imaginary axis is inside the region of convergence (ROC) of the Laplace transform. Since the given Laplace transform has poles on the imaginary axis, the imaginary axis is not part of the ROC (the ROC is <span class="math-container">$\textrm{Re}\{s\}>0$</span>).</p>
<p>The Fourier transform of the function</p>
<p><span class="math-container">$$x(t)=\cos(\omega_0t)u(t)\tag{1}$$</span></p>
<p>exists in a distributional sense, i.e., if we allow Dirac impulses (and, possibly, its derivatives) in the expression of the Fourier transform.</p>
<p>Using the Fourier transform of the step function</p>
<p><span class="math-container">$$\mathcal{F}\{u(t)\}=\pi\delta(\omega)+\frac{1}{j\omega}\tag{2}$$</span></p>
<p>and the modulation property of the Fourier transform</p>
<p><span class="math-container">$$\mathcal{F}\left\{e^{j\omega_0t}u(t)\right\}=\pi\delta(\omega-\omega_0)+\frac{1}{j(\omega-\omega_0)}=Y(\omega)\tag{3}$$</span></p>
<p>we can easily derive the Fourier transform of <span class="math-container">$(1)$</span> by noticing that <span class="math-container">$x(t)$</span> is the real part of the time-domain function on the left-hand side of <span class="math-container">$(3)$</span>, so its Fourier transform must be the even part of <span class="math-container">$Y(\omega)$</span>:</p>
<p><span class="math-container">$$\begin{align}X(\omega)&=\frac12\left[Y(\omega)+Y^*(-\omega)\right]\\&=\frac{\pi}{2}\left[\delta(\omega-\omega_0)+\delta(\omega+\omega_0)\right]+\frac{j\omega}{\omega_0^2-\omega^2}\tag{4}\end{align}$$</span></p>
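As a sanity check on the Laplace side (a sympy sketch, not part of the original answer): the one-sided Laplace transform of the cosine can be computed symbolically, and its poles indeed sit at $s=\pm j\omega_0$ on the imaginary axis, so the ROC $\textrm{Re}\{s\}>0$ excludes the substitution $s=j\omega$.

```python
import sympy as sp

t, w0 = sp.symbols('t omega_0', positive=True)
s = sp.symbols('s')

# one-sided Laplace transform of cos(w0*t)
F = sp.laplace_transform(sp.cos(w0 * t), t, s, noconds=True)
print(F)                                      # s/(omega_0**2 + s**2)

# poles lie on the imaginary axis -> j-axis not in the ROC Re{s} > 0
poles = sp.solve(sp.denom(sp.together(F)), s)
print(poles)                                  # [-I*omega_0, I*omega_0]
```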
| 167
|
Laplace transform
|
Name of property of Laplace transform
|
https://dsp.stackexchange.com/questions/84473/name-of-property-of-laplace-transform
|
<p><span class="math-container">\begin{align}
L[e^{-at}u(t)] &= \frac{1}{s+a}\\
L[\cos(\omega_{o}t)u(t)] &= \frac{s}{s^{2}+\omega^{2}_{o}}\\
L[e^{-at}\cos(\omega_{o}t)u(t)] &= \frac{s+a}{(s+a)^{2}+\omega_{o}^2}
\end{align}</span>
Everywhere <span class="math-container">$e^{-at}$</span> is multiplied with a function <span class="math-container">$x(t)$</span> and the new function becomes <span class="math-container">$$e^{-at}x(t) = y(t)\ \ {L \atop \leftrightarrow} \ \ \ Y(s) = X(s+a).$$</span></p>
<p>How is this property called?</p>
|
<p>That's the <a href="https://www.tutorialspoint.com/time-scaling-and-frequency-shifting-properties-of-laplace-transform" rel="nofollow noreferrer">Frequency Shifting Property</a></p>
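For a quick symbolic check of the property (a sympy sketch): shifting $s$ by $a$ in the transform of the cosine reproduces the transform of the exponentially damped cosine.

```python
import sympy as sp

t, a, w0 = sp.symbols('t a omega_0', positive=True)
s = sp.symbols('s')

X = sp.laplace_transform(sp.cos(w0 * t), t, s, noconds=True)
Y = sp.laplace_transform(sp.exp(-a * t) * sp.cos(w0 * t), t, s, noconds=True)

# frequency-shifting property: Y(s) == X(s + a)
print(sp.simplify(Y - X.subs(s, s + a)))   # 0
```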
| 168
|
Laplace transform
|
How to calculate the steady state response $y_{ss}(t)$ of a LTI system given the Laplace transform $Y(s)$?
|
https://dsp.stackexchange.com/questions/36020/how-to-calculate-the-steady-state-response-y-sst-of-a-lti-system-given-the
|
<p>I am given the Laplace transform of the output of an LTI system: $$Y(s) = \frac{1}{s((s+2)^2+1)}$$ Asked is what the steady state response $y_{ss}(t)$ would be. I think that $y_{ss}(t) = \lim_{t\to\infty} y(t)$, since after waiting infinitely long, the system should be in steady state. (Right?)</p>
<p>I thought to use the <a href="https://en.wikipedia.org/wiki/Final_value_theorem" rel="nofollow noreferrer">final value theorem</a>:</p>
<p>$$\lim_{t\to\infty}y(t)=\lim_{s\to 0}sY(s)$$
This gives $$\lim_{s\to 0} sY(s)=\lim_{s\to 0}\frac{1}{(s+2)^2+1} = \frac{1}{5}.$$</p>
<p>This is different from $\frac{1}{10}$. When I let a computer algebra system calculate $\mathscr{L}^{-1}[Y(s)]
\bigg{|}_{t=\infty}$ I get $\frac{1}{10}$.
(I'm using wxMaxima and used <code>limit(ilt(1/(s*(s^2 + 2*s + 10)), s, t), t, inf);</code>.)</p>
<p>What am I doing wrong? Thanks in advance.</p>
|
<p>For these calculations, it is better to give the <em>Wolfram Alpha</em> answers:</p>
<p><a href="http://www.wolframalpha.com/input/?source=frontpage-immediate-access&i=inverse%20Laplace%20transform%201%2F(s*(s%5E2%20%2B%204*s%20%2B%205))" rel="nofollow noreferrer">inverse Laplace transform 1/(s*(s^2 + 4*s + 5))</a> </p>
<p>Which gives the correct expression, consistent with the <em>Final Value</em> expression of 1/5:</p>
<p>$$\frac{1}{5} - \frac{1}{5} e^{-2 t} \left(2 \sin(t) + \cos(t)\right)$$</p>
<p>This is very different from the complex (phasor) expression, which could inexplicably have a 1/10, but when evaluated it is still 1/5:</p>
<p>$$\frac{1}{10} i (e^{(-2 - i) t} ((2 + i) e^{2 i t} + (-2 + i)) - 2 i)$$</p>
| 169
|
Laplace transform
|
Finding and displaying Laplace or Z transform ROC(region of convergence) using MATLAB
|
https://dsp.stackexchange.com/questions/83285/finding-and-displaying-laplace-or-z-transform-rocregion-of-convergence-using-m
|
<p>Is there any way, we can use MATLAB for finding and displaying Laplace or Z transform Region of convergence?</p>
|
<p>Matlab can only compute expressions for the uni-lateral (one-sided) versions of the Laplace transform and Z-transform. It doesn't explicitly determine the ROCs, but since both transforms are uni-lateral, there's only one possible choice for the ROCs: let <span class="math-container">$p_k$</span> be the poles of the Laplace or Z-transform. The ROCs are given by
<span class="math-container">$$\textrm{Laplace transform:}\quad\textrm{Re}\{s\}>\max_k\textrm{Re}\{p_k\}$$</span>
<span class="math-container">$$\textrm{Z-transform:}\quad |z|>\max_k|p_k|$$</span></p>
<p>I.e., for the (uni-lateral) Laplace transform the ROC is a right half-plane, to the right of the right-most pole, and for the (uni-lateral) Z-transform, the ROC is the region outside the circle (centered at the origin) with its radius equal to the maximum pole magnitude.</p>
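As an illustration of that rule (a sketch with made-up pole locations, not MATLAB output), the ROC boundary can be read off directly from the poles:

```python
import numpy as np

# hypothetical example poles for a Laplace transform and a Z-transform
laplace_poles = np.array([-2 + 1j, -2 - 1j, -5])      # H(s) poles
z_poles = np.array([0.9, 0.5 * np.exp(1j * 0.3)])     # H(z) poles

# uni-lateral ROC: right of the right-most pole / outside the largest pole circle
print("Laplace ROC: Re{s} >", laplace_poles.real.max())   # -2.0
print("Z ROC:       |z|   >", np.abs(z_poles).max())      # 0.9
```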
| 170
|
Laplace transform
|
Wavelet transform in control systems
|
https://dsp.stackexchange.com/questions/18053/wavelet-transform-in-control-systems
|
<p>In control systems, the Laplace transform is often used to analyze the stability and the performance of <a href="http://en.wikipedia.org/wiki/LTI_system_theory" rel="nofollow">LTI system</a>. For instance, the LTI system is stable if and only if the <a href="http://en.wikipedia.org/wiki/Transfer_function" rel="nofollow">transfer function</a>, which is the quotient between the Laplace transform of the output of the system and the Laplace transform of the input, has all of its poles in the left half complex plane. </p>
<p>Does wavelet transforms also found applications in analysis or design of control systems?</p>
|
<p>In the paper <a href="http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=6811177" rel="nofollow">Haar-Based Stability Analysis of LPV Systems</a>, the Haar wavelet transform theory have been used to design linear matrix inequalities (LMIs) to analyze the stability of <a href="https://en.wikipedia.org/wiki/Linear_parametric_varying_control" rel="nofollow">LPV systems</a>. </p>
<p>As the resolution level of the Haar wavelet increases, the number of variables and rows of the designed LMI increases and the feasibility of this LMI becomes a less conservative condition for the stability of the LPV system. Although the stability of LPV systems using LMIs that become less conservatives with the increase of variables and/or rows have already been proposed in the literature, the Haar-based approach can handle a larger class of parametric dependencies as well as non-convex parametric domains. </p>
| 171
|
Laplace transform
|
Transfer function and Laplace domain
|
https://dsp.stackexchange.com/questions/84485/transfer-function-and-laplace-domain
|
<p>If we give a input <span class="math-container">$x(t)=u(t)$</span> to a system <span class="math-container">$\mathcal{S}$</span> we get an output <span class="math-container">$y(t) = e^{-t} u(t)$</span>.<br />
After we Laplace-transform both the input and the output we get the transfer function
<span class="math-container">$$H(s) = 1-\frac{1}{s+1}$$</span> which in the time domain is:
<span class="math-container">$$h(t) = \delta(t)-e^{-t}u(t)$$</span></p>
<p>But why do we need to transform everything in the Laplace domain to find the transfer function? What does the Laplace domain describe fundamentally? Why aren't we able to find the transfer function by division of output with input in the time domain?</p>
|
<p>First of all it's important to understand that this is all about <em>linear and time-invariant (LTI)</em> systems. Otherwise, you can't generally use a transfer function to characterize a system. So if you have some input <span class="math-container">$x(t)$</span> and some corresponding system output <span class="math-container">$y(t)$</span>, you generally can't say much about the system, <em>unless</em> you know that it is LTI.</p>
<p>The transfer function of an LTI system is defined in the frequency domain, not in the time domain. The transfer function <span class="math-container">$H(s)$</span> relates the Laplace transforms of the output and input signals:</p>
<p><span class="math-container">$$Y(s)=H(s)X(s)\tag{1}$$</span></p>
<p>where <span class="math-container">$X(s)$</span> and <span class="math-container">$Y(s)$</span> are the Laplace transforms of the input and output signal, respectively, and <span class="math-container">$H(s)$</span> is the system's transfer function. The nice thing about Eq. <span class="math-container">$(1)$</span> is that input and output are related by multiplication. The reason for this is that the Laplace transform converts convolution into multiplication, and convolution is the process which relates the input signal to the output signal of an LTI system in the time domain:</p>
<p><span class="math-container">$$y(t)=(x\star h)(t)=\int_{-\infty}^{\infty}h(\tau)x(t-\tau)d\tau\tag{2}$$</span></p>
<p>Eq. <span class="math-container">$(2)$</span> is just the inverse Laplace transform of <span class="math-container">$(1)$</span>. Note that the impulse response <span class="math-container">$h(t)$</span> and the transfer function are related by the Laplace transform. Both functions completely characterize the relation between input and output of an LTI system.</p>
<p>Given input and output, it is usually much simpler to determine the transfer function, and from it the impulse response, than computing the impulse response directly in the time domain. Determining the impulse response in the time domain would mean to solve <span class="math-container">$(2)$</span> for <span class="math-container">$h(t)$</span>, given <span class="math-container">$x(t)$</span> and <span class="math-container">$y(t)$</span>. This is obviously much harder than to compute <span class="math-container">$H(s)$</span> from <span class="math-container">$(1)$</span> (assuming that the Laplace transforms <span class="math-container">$X(s)$</span> and <span class="math-container">$Y(s)$</span> are easy to compute).</p>
<p>And, answering your last question as to why we can't find the transfer function by dividing in the time domain: because that's not what LTI systems do; their inputs and outputs are not related via multiplication in the time domain, but by convolution. Here's where the Laplace transform comes into play, it converts convolution into multiplication.</p>
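With the example from the question ($x(t)=u(t)$, $y(t)=e^{-t}u(t)$), the division in the Laplace domain can be sketched in sympy:

```python
import sympy as sp

t, s = sp.symbols('t s')

# Laplace transforms of input x(t) = u(t) and output y(t) = e^{-t} u(t)
X = sp.laplace_transform(1, t, s, noconds=True)            # 1/s  (u(t) for t > 0)
Y = sp.laplace_transform(sp.exp(-t), t, s, noconds=True)   # 1/(s + 1)

H = sp.simplify(Y / X)
print(H)                                   # s/(s + 1)
print(sp.simplify(H - (1 - 1/(s + 1))))    # 0, same as the question's H(s)
```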
| 172
|
Laplace transform
|
Step response of a given input and output (Laplace or Fourier)
|
https://dsp.stackexchange.com/questions/81291/step-response-of-a-given-input-and-output-laplace-or-fourier
|
<p>I am trying to calculate the step response of the system given below.
Should I use the Laplace transform or the Fourier transform?
<a href="https://i.sstatic.net/36JZu.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/36JZu.jpg" alt="enter image description here" /></a></p>
| 173
|
|
Laplace transform
|
Laplace transform of this simple parallel RLC circuit? (For audio speaker simulation ...)
|
https://dsp.stackexchange.com/questions/91516/laplace-transform-of-this-simple-parallel-rlc-circuit-for-audio-speaker-simula
|
<h2>SPEAKER AS RLC CIRCUIT</h2>
<p><a href="https://circuitdigest.com/electronic-circuits/simulate-speaker-with-equivalent-rlc-circuit" rel="nofollow noreferrer">I read this article here</a> which demonstrates a simulation of a speaker as a simple RLC circuit where the RLC components are in parallel:</p>
<p><a href="https://i.sstatic.net/6sjXM.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/6sjXM.jpg" alt="enter image description here" /></a></p>
<h2>MY GOAL</h2>
<p>I am interested in creating a sample domain C++ script that can take audio input samples at a given sample rate and return the processed audio "output" by the "speaker".</p>
<p>I understand the R1 & L1 in series just create some phase rotation and nothing else (correct?) and if so, I have no interest in these components. Thus what I would want to simulate would be for example with rough values as shown here:</p>
<p><a href="https://i.sstatic.net/sczom.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/sczom.png" alt="enter image description here" /></a></p>
<p>I believe the two capacitors C1 and C2 in parallel simply add together to make a new C value (summed values?), in which case it is then just a Resistor <span class="math-container">$R$</span>, Inductor <span class="math-container">$L$</span>, and Capacitor <span class="math-container">$C$</span> in parallel.</p>
<h2>MY PROBLEM</h2>
<p>I would think this would be easy but I can't find any simple example online of this system solved in a Laplace function.</p>
<p>I believe what I need to do is solve the Laplace transfer function. Then I can use <a href="https://lpsa.swarthmore.edu/LaplaceZTable/LaplaceZFuncTable.html" rel="nofollow noreferrer">a reference like this one</a> for substituting terms of <code>z</code>.</p>
<p>Or I think I'm supposed to do a substitution as I was instructed previously <a href="https://dsp.stackexchange.com/questions/62360/help-with-my-first-simple-z-transform?rq=1">here</a>, substituting one of:</p>
<ul>
<li>Backward Euler: <span class="math-container">$s≃\frac{1−z^{−1}}{T}$</span></li>
<li>Forward Euler: <span class="math-container">$s≃\frac{z−1}{T}$</span></li>
<li>Tustin transformation: <span class="math-container">$s≃\frac{2(z−1)}{T(z+1)}$</span></li>
</ul>
<p>where <span class="math-container">$T$</span> equals <code>1/sampleRate</code>.</p>
<p>Then I need to convert this into <code>n</code> domain of samples for C++ coding (or any other simple language I can rewrite to C++), where <code>n</code> is the current sample, <code>n_1</code> is the last output, and <code>n_2</code> is the output two samples prior.</p>
<p>I have no formal education in this though and it has been 3-5 years since the last time I did this (and I have only done it a few times).</p>
<p>Do I have the right idea? Any help with how this would work?</p>
|
<p>Do what r b-j suggests: get a Laplace domain equivalent first, and then transform from <span class="math-container">$s$</span> to <span class="math-container">$z$</span>.</p>
<p>The <a href="https://circuit-analysis.github.io/chapter-11.html" rel="nofollow noreferrer">equivalent circuit</a> will be something like:</p>
<p><span class="math-container">\begin{align}
H(s) &= R_1 + sL_1 + \frac{1}{\frac{1}{\frac{1}{s(C_1+C_2)}} + \frac{1}{s L_2} + \frac{1}{R_2}}\\
&= R_1 + sL_1 + \frac{1}{s(C_1+C_2) + \frac{1}{s L_2} + \frac{1}{R_2}}\\
&= R_1 + sL_1 + \frac{s}{s^2(C_1+C_2) + \frac{s}{R_2}+ \frac{1}{L_2} }\\
\end{align}</span></p>
<p>and from there, just use your Tustin transformation.</p>
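A minimal discretization sketch in Python (using scipy in place of hand-written C++, with made-up component values): discretize the parallel-tank part of <span class="math-container">$H(s)$</span> via the Tustin substitution, then filter samples with the resulting difference equation.

```python
import numpy as np
from scipy.signal import bilinear, lfilter

# illustrative component values -- assumptions, not taken from the article
R2, L2 = 12.0, 5e-3
C = 200e-6 + 150e-6            # C1 and C2 in parallel simply add

# parallel-tank part of H(s):  Z(s) = s / (s^2 C + s/R2 + 1/L2)
num = [1.0, 0.0]
den = [C, 1.0 / R2, 1.0 / L2]

fs = 48000.0                   # sample rate, T = 1/fs
b, a = bilinear(num, den, fs=fs)   # Tustin: s ~= 2*fs*(z-1)/(z+1)

# y[n] = b0*x[n] + b1*x[n-1] + b2*x[n-2] - a1*y[n-1] - a2*y[n-2]
x = np.random.randn(1000)      # stand-in for audio input samples
y = lfilter(b, a, x)
print(len(b), len(a), y.shape)
```

The `b` and `a` coefficients map directly onto the `n`, `n_1`, `n_2` difference-equation form described in the question, so porting to C++ is a few lines of state bookkeeping.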
| 174
|
Laplace transform
|
How to transform a Fractional Order Laplace Transfer Function into a digital filter?
|
https://dsp.stackexchange.com/questions/45918/how-to-transform-a-fractional-order-laplace-transfer-function-into-a-digital-fil
|
<p>I'm working with loudspeaker impedance analysis. Electrical behavior of loudspeakers can be modeled with RLC networks. But real loudspeakers have components, that exhibit some non-linear and frequency dependent behaviors, that make them difficult to model with simple LTI systems.</p>
<p>One of the problems with loudspeakers is, the voice coil inductance decreases with frequency. (ignoring low frequency behavior, for this question, and focusing on higher freq's). Understanding impedance Z(s) as a Laplace Transfer Function between current and voltage signals, V(s) = Z(s) * I(s).
Speaker impedance does not follow the simple formula of a RL series circuit, Z(s) = Re + (L * s). Neither behaves like a plain resistance Z(s) = Re . </p>
<p>Instead it follows a formula where the s is raised to a non-integer exponent. So it would be Z(s) = Re + (L * (s^a)).</p>
<p>Impedance rises with frequency, not proportionally with f, but proportionally with f^a. Where f = frequency, and a is a real fractional number.</p>
<p>In practice this number $a$ is around 0.7 (depending on the speaker). This phenomenon occurs because of the frequency dependence of the magnetic permeability of the iron core.</p>
<p>The following picture describes the issue:
<a href="https://i.sstatic.net/xpvQn.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/xpvQn.jpg" alt="Speaker Impedance Curve"></a></p>
<p>See at 5KHz, impedance is 17 Ohms. At twice this frequency, 10KHz. With a simple inductance, a doubled impedance would be expected, 34 Ohms. But that does not occur. Impedance is increased but not doubled. It is around 27 Ohms. Which is a smaller value than expected with a plain inductance.</p>
<p>Now, what I want to do is to transform the Laplace TF Z = Re + L * (s^0.7) to a discrete-time Z-transform TF, and then to an IIR digital filter. That would allow me to see and analyze the current waveform from a given voltage signal. The voltage signal is music from an MP3 file, because audio amp outputs are voltage controlled.</p>
<p>With integer exponents in the Laplace TF, is very easy to transform with the Match-Z or Tustin methods. But I have no clue how to do it with fractional exponents. Suppose I want to do Match-Z, how I find the roots? Suppose Tustin Method, replacing s with ((z-1)/(z+1)), How can I raise the ((z-1)/(z+1)) term to a 0.7 exponent???</p>
<p>I know this is a bit hard. Thanks in advance.</p>
| 175
|
|
Laplace transform
|
Question about z transform
|
https://dsp.stackexchange.com/questions/27385/question-about-z-transform
|
<p>After studying the z transform from different books and literature on the internet, I want to ask a few questions that confuse me.</p>
<p>a) From the discrete-time Fourier transform we can derive the equation for the z transform: $$ X(z)= \sum _ {n=-\infty}^{+\infty} x[n]z^{-n}$$ where $z$ is represented in polar form $z=re^{j\omega}$.
I want to know why we represent $z$ in polar coordinates, as in some books it is written that $z$ is complex: $z=\sigma + j \omega$</p>
<p>b) Is the ROC for the $z$ transform the same as for the Laplace transform? In the Laplace transform we check the direction of $t$ (i.e. if we have $u(t)$ then $Re[s] > a$)?</p>
|
<p>It is common in signal-processing books to write $z$ in polar form to make clear the relationship between the z-transform and the Fourier transform: the z-transform evaluated on the unit circle, i.e., for $r=1$, equals the discrete-time Fourier transform. That is,</p>
<p>$$z=re^{j\omega}\Big|_{r=1}=e^{j\omega}$$</p>
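A quick numeric check of this identity (a small sketch): evaluating the z-transform of a finite sequence at $z=e^{j\omega}$ gives exactly the DTFT value at $\omega$.

```python
import numpy as np

x = np.random.randn(32)                 # any finite-length sequence
n = np.arange(len(x))
w = 1.3                                 # arbitrary frequency in rad/sample

# z-transform evaluated on the unit circle, z = e^{jw} ...
z = np.exp(1j * w)
X_z = np.sum(x * z ** (-n.astype(float)))

# ... equals the DTFT at w
X_dtft = np.sum(x * np.exp(-1j * w * n))
print(np.abs(X_z - X_dtft))             # ~ 0
```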
| 176
|
Laplace transform
|
Laplace of step and integration are same?
|
https://dsp.stackexchange.com/questions/42723/laplace-of-step-and-integration-are-same
|
<p>Why is the Laplace transform of a step function the same as that of an integrator?</p>
<p>\begin{align}
\mathcal L\left[u(t)\right] &= \frac 1s\\
\mathcal L \left[ \int dt\right] &= \frac 1s
\end{align}</p>
<p>Please clear my doubt on this.</p>
|
<p>This is because the impulse response of an integrator is $h(t)=u(t)$. The output is the convolution of the input with the impulse response:
$$y(t)=\int_{-\infty}^{\infty}x(\tau)h(t-\tau)d\tau$$
and with $h(t)=u(t)$ it becomes
$$\begin{align}
y(t)&=\int_{-\infty}^{\infty}x(\tau)u(t-\tau)d\tau\\
&=\int_{-\infty}^{t}x(\tau)d\tau\tag{1}\end{align}$$
where $(1)$ follows from the fact that $$u(t-\tau)=\begin{cases}0& \forall \tau>t\\
1 & \text{otherwise}\end{cases}$$
The transfer function (which is the Laplace transform of impulse response) is $$H(s)=\mathcal{L}\{h(t)=u(t)\}=\frac{1}{s}$$</p>
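Numerically (a small sketch), convolving an input with a sampled unit step reproduces its running integral, matching $(1)$:

```python
import numpy as np

dt = 1e-3
t = np.arange(0, 1, dt)
x = np.cos(2 * np.pi * 3 * t)          # any test input

u = np.ones_like(t)                    # unit step = integrator impulse response
y_conv = np.convolve(x, u)[:len(t)] * dt   # convolution with u(t)
y_int = np.cumsum(x) * dt                  # running integral of x

print(np.max(np.abs(y_conv - y_int)))  # ~ 0
```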
| 177
|
Laplace transform
|
ROC of the function in the problem 9.14 of Oppenheim's Signals and Systems textbook
|
https://dsp.stackexchange.com/questions/63377/roc-of-the-function-in-the-problem-9-14-of-oppenheims-signals-and-systems-textb
|
<blockquote>
<p><a href="https://i.sstatic.net/7paLk.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/7paLk.jpg" alt="Problem 9.14"></a></p>
</blockquote>
<hr>
<p>I have solved the problem 9.14 in Oppenheim's Signals and Systems textbook, but my solution and the one in Slader is different. Problem is given above. <a href="https://www.slader.com/textbook/9780138147570-signals-and-systems/723/problems/14/" rel="nofollow noreferrer">And Slader solution is here.</a></p>
<p>I have also attached my solution below. The Laplace transforms are the same, but the ROCs in the Slader solution and mine are different. My question is: how can this Laplace transform have an ROC of Re{s}>0? This should be wrong, since this region contains the two poles of the Laplace transform, which should not be the case for a rational Laplace transform.</p>
<p>Thanks in advance.
<a href="https://i.sstatic.net/g7Jrj.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/g7Jrj.jpg" alt="enter image description here"></a></p>
|
<p>You're right, the ROC of the Laplace transform of a two-sided signal is a strip in the complex plane. In your case, the imaginary axis is inside the ROC, and the ROC is limited by the poles in the right and left half-planes. If the ROC were the right half-plane, the signal would be right-sided, which is clearly not the case.</p>
| 178
|
Laplace transform
|
From where this Laplace transform for tracking error came?
|
https://dsp.stackexchange.com/questions/44939/from-where-this-laplace-transform-for-tracking-error-came
|
<blockquote>
<p>System for estimating the tracking error in an A-D converter.
<a href="https://i.sstatic.net/9Evhk.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/9Evhk.png" alt="enter image description here"></a></p>
<p>Let $e_i(t)$ be ramp function with slope $e_i'$ and assume that the sampling switch is closed at $t = 0$, at which time $e_i = e_0$. Then </p>
<p>$$E_i(s) - E_0(s) = \frac{e_i'}{s^2}-\frac{(e_i'/s^2)(1/(sC))}{R + 1/(sC)} \tag{1}\label{A}.$$</p>
</blockquote>
<p>I understand that transfer function of this system is $\frac{(1/(sC))}{R + 1/(sC)},$ and it is equal to $\frac{E_0(s)}{E_i(s)},$ but from where $(\ref{A})$ came?</p>
|
<p>You correctly noted
$$
\frac{E_o(s)}{E_i(s)} = \frac{1/sC}{R+1/sC}.
$$</p>
<p>But it also says $e_i(t)$ is a ramp with slope $e_i'$, so its Laplace transform is given by $E_i(s) = \frac{e_i'}{s^2}$.</p>
<p>Now just do some algebra:</p>
<p>$$
\frac{E_i(s)-E_o(s)}{E_i(s)} = 1-\frac{E_o(s)}{E_i(s)} = 1-\frac{1/sC}{R+1/sC}
$$
which gives
\begin{eqnarray}
E_i(s)-E_o(s) &=& \left( 1-\frac{1/sC}{R+1/sC} \right) E_i(s)\\
& =& \left(1-\frac{1/sC}{R+1/sC}\right)\frac{e_i'}{s^2}\\
&=& \frac{e_i'}{s^2}-\frac{(e_i'/s^2)(1/sC)}{R+1/sC}.
\end{eqnarray}</p>
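The algebra can also be verified symbolically (a sympy sketch, writing $k$ for the slope $e_i'$), which additionally yields a compact closed form for the tracking error:

```python
import sympy as sp

s, R, C, k = sp.symbols('s R C k', positive=True)   # k stands for the slope e_i'

Ei = k / s**2                                       # ramp input, E_i(s) = e_i'/s^2
H = (1 / (s*C)) / (R + 1 / (s*C))                   # E_o(s)/E_i(s)

err = sp.simplify(Ei - H * Ei)                      # E_i(s) - E_o(s)
print(err)

# compact closed form of the tracking error
print(sp.simplify(err - k*R*C / (s * (s*R*C + 1))))   # 0
```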
| 179
|
Laplace transform
|
Why Z-transform is considered as separate transform?
|
https://dsp.stackexchange.com/questions/24099/why-z-transform-is-considered-as-separate-transform
|
<p>The mathematical formulas of the Laplace and Z transforms are the same with just one difference, i.e., the former uses $t$ for continuous-time signals and the latter uses $n$ for discrete-time signals. I don't think that there are any other differences.</p>
<p>While discussing Fourier transform for continuous-time (normally analog) signals, we use the continuous Fourier transform and for discrete-time (most often digital) signals we use the discrete-time Fourier transform (DTFT). </p>
<p>So my question is: why is the Z transform considered a separate transform? Why is it not named the <em>"Discrete Laplace Transform"</em>?</p>
|
<p>There is indeed a transform called <em>discrete Laplace transform</em> and it is of course closely related to the $\mathcal{Z}$-transform. The (unilateral) discrete Laplace transform of a sequence $f_n$ is defined by (cf. <a href="https://books.google.nl/books?id=wCIGCAAAQBAJ&lpg=PA78&ots=FDqF81ObPp&dq=%22discrete%20laplace%20transform%22&pg=PA77#v=onepage&q&f=false" rel="nofollow">link</a>)</p>
<p>$$\mathcal{L}_T\{f_n\}=\sum_{n=0}^{\infty}f_ne^{-snT}$$</p>
<p>with some $T>0$. The discrete Laplace transform can be interpreted as the Laplace transform of a sampled function $f(t)\cdot\sum_n\delta(t-nT)$ with $f_n=f(nT)$.</p>
<p>In practice it is not convenient to have the factor $e^{sT}$ appear everywhere. If one substitutes $z=e^{sT}$, the discrete Laplace transform is called (unilateral) $\mathcal{Z}$-transform:</p>
<p>$$\mathcal{Z}\{f_n\}=\sum_{n=0}^{\infty}f_nz^{-n}$$</p>
<p>The same can obviously be done for the bilateral versions of the transforms, where the integrals and the sums start at $-\infty$.</p>
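For a concrete sequence (a sympy sketch with $f_n=a^nu[n]$), the substitution $z=e^{sT}$ indeed maps one closed form onto the other:

```python
import sympy as sp

a, T, s, z = sp.symbols('a T s z', positive=True)

# closed forms for f_n = a^n u[n] (ROC |a e^{-sT}| < 1, resp. |z| > a)
L_disc = 1 / (1 - a * sp.exp(-s * T))   # discrete Laplace transform
Z = z / (z - a)                         # unilateral Z-transform

# substituting z = e^{sT} into the Z-transform recovers the discrete Laplace form
print(sp.simplify(Z.subs(z, sp.exp(s * T)) - L_disc))   # 0
```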
| 180
|
Laplace transform
|
Can we tell if a system is linear and time-invariant from its frequency response?
|
https://dsp.stackexchange.com/questions/78176/can-we-tell-if-a-system-is-linear-and-time-invariant-from-its-frequency-response
|
<p>Given a system with a known frequency response in the S-domain. Is there a way to find whether the system is linear and time invariant?</p>
<p>My current understanding is that we need to take the inverse Laplace transform of the system and prove linearity in the time domain.</p>
<p>Edit:<br />
As per the comments, given that the existence of a Laplace transform implies linearity:
is there an intuition on why the mere existence of the Laplace transform of a system would imply linearity?</p>
|
<blockquote>
<p>Given a system with a known frequency response in the S-domain. Is there a way to find whether the system is linear and time invariant?</p>
</blockquote>
<p>If by "known frequency response in the s-domain" you mean a Laplace transfer function* as a ratio of polynomials in s -- yes. Laplace transform analysis on a system is not valid unless the system is linear and time invariant. So, given a <em>valid</em> Laplace transform, the system is presumed to be linear and time invariant.</p>
<p>If you mean a <em>measured</em> frequency response -- no, that's a different animal. You can measure the frequency response of any system, no matter how time-varying or nonlinear. Whether such measurements are meaningful, or can be used to determine the degree to which** the system is linear and time-invariant, depends on the system and the care with which one does the measurements.</p>
<p>* Edited from the original "Laplace Transform" based on comments.</p>
<p>** No physical system is <em>entirely</em> linear and time-invariant; a few can be treated as such for all earthly purposes, and many can be very profitably approximated as such, or linear analysis would be a mathematical sideline, not a mainstream pursuit.</p>
| 181
|
Laplace transform
|
Bilateral $\mathcal Z$-transform of exponential
|
https://dsp.stackexchange.com/questions/25489/bilateral-mathcal-z-transform-of-exponential
|
<p>We all know that $a^nu(n)$ has a unilateral $\mathcal Z$-transform. But what is the (bilateral) $\mathcal Z$-transform of $a^n$? When I tried to solve it, I got 'zero' as the answer.</p>
<p>But the bilateral Laplace transform of $e^t$ doesn't exist. Both are exponentials in the discrete and continuous domains, respectively. Considering the similarity between the Laplace and $\mathcal Z$-transforms, how can the above problem be explained?</p>
<p>Below, this is how I got 'zero'</p>
<p>$$a^n=a^nu(n) + a^nu(-n-1),$$</p>
<p>Now taking $\mathcal Z$-transform on both sides we get
$$\frac{z}{z-a}\quad \text{and}\quad \frac{-z}{z-a}$$ respectively which add to 'zero'</p>
|
<p>In complete analogy with the bilateral Laplace transform of $x(t)=e^{-at}$ (which doesn't exist), the bilateral $\mathcal{Z}$-transform of $a^n$ doesn't exist either. The series</p>
<p>$$\sum_{n=-\infty}^{\infty}a^nz^{-n}$$</p>
<p>converges nowhere, simply because $a^n$ grows without bounds for $n\rightarrow -\infty$ if $|a|<1$, or for $n\rightarrow\infty$ if $|a|>1$. Of course, for $|a|=1$ there series doesn't converge either.</p>
<p><strong>EDIT:</strong></p>
<p>As for your computation of the $\mathcal{Z}$-transform of $a^n$, the mistake lies in the fact that in addition to the algebraic expression of the transform you also need to consider the region of convergence. If you split $a^n$ (as you did) as</p>
<p>$$a^n=a^nu[n]+a^nu[-n-1]\tag{1}$$</p>
<p>you can compute the $\mathcal{Z}$-transform of both right-hand side expressions separately:</p>
<p>$$\mathcal{Z}\{a^nu[n]\}=\frac{z}{z-a},\quad |z|>|a|\\
\mathcal{Z}\{a^nu[-n-1]\}=-\frac{z}{z-a},\quad |z|<|a|\tag{2}$$</p>
<p>Note that the region of convergence (ROC) for the first part is outside the circle with radius $|a|$, whereas the ROC of the second part is inside the circle with radius $|a|$. The ROC of the total expression would be the overlap of the two ROCs, which is zero. Consequently, the sum doesn't converge anywhere and the $\mathcal{Z}$-transform of the total expression doesn't exist.</p>
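To see the empty ROC overlap concretely (a small numerical sketch for $|a|<1$): picking any $z$ with $|z|>|a|$, where the causal half of the sum converges, makes the anticausal terms blow up (and choosing $|z|<|a|$ makes the causal half diverge instead).

```python
import numpy as np

a, z = 0.5, 0.8               # |z| > |a|: the causal half of the sum converges...
n_neg = np.arange(-50, 0)     # negative-time half of the bilateral sum
anticausal = a ** n_neg.astype(float) * z ** (-n_neg.astype(float))

# ...but the anticausal terms (z/a)^{|n|} grow without bound
print(np.abs(anticausal).max())
```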
| 182
|
Laplace transform
|
Laplace domain transfer function from system sampled at discrete times
|
https://dsp.stackexchange.com/questions/88214/laplace-domain-transfer-function-from-system-sampled-at-discrete-times
|
<p>I'm trying to understand an analysis of a sampled continuous time system in the Laplace domain. The source analysis is <a href="http://bwrcs.eecs.berkeley.edu/Classes/icdesign/ee240_sp10/lectures/Lecture22_Offset_Cancel_2up.pdf" rel="nofollow noreferrer">here</a> (PDF page 6, slide marked 11); I'll explain further below. Suppose I have a system which takes the difference between its current input and its input sampled from half a clock cycle ago, so that:</p>
<p><span class="math-container">$$
V_o(kT)=V_f(kT)-V_f(kT-\tfrac{T}{2}) \tag{1}
$$</span></p>
<p>Here <span class="math-container">$V_o(t)$</span> and <span class="math-container">$V_f(t)$</span> are continuous-time functions, <span class="math-container">$k$</span> is a nonnegative integer, and <span class="math-container">$T$</span> is a positive time interval.</p>
<p>I think that the system function is likely properly examined in terms of a Z-transform, but I want to understand how it could be analyzed via Laplace transform. The source I cited above claims that you can simply treat <span class="math-container">$kT$</span> as a continuous time variable and end up with (taking the Laplace transform of the RHS above)</p>
<p><span class="math-container">$$
V_f(s)(1-e^{-sT/2}) \tag{2}
$$</span></p>
<p>A (maybe small) detail that confuses me here is that this suggests that the original CT signal had a shift of the form <span class="math-container">$V_f(t-T/2)$</span>, when actually if you do a substitution <span class="math-container">$t'=kT$</span>, we have <span class="math-container">$V_f(t'-\frac{1}{2k})$</span>, which doesn't look right. Is there a way to show that the RHS of equation 1 has the Laplace transform of equation 2, or is there an approximation being made here?</p>
<p>EDIT: <a href="https://i.imgur.com/3UKSidn.jpeg" rel="nofollow noreferrer">I've made a timing diagram</a> to try to make the situation more clear. This is a filled-in version of slide 10 from my source above, but without the gain block A which I don't think makes a difference here. In this diagram I tried to make <span class="math-container">$V_f(t)$</span> a very slow-varying signal (slow relative to input sinusoid <span class="math-container">$V_{in}$</span>). <span class="math-container">$V_1$</span> is the output of the sample and hold block.</p>
<p>At time <span class="math-container">$t=kT-T/2$</span>, the first phase ends and the sample/hold block holds the value of <span class="math-container">$V_f(t=kT-T/2)$</span>. At the start of phase 2, <span class="math-container">$V_{in}$</span> is connected to the system and the output becomes <span class="math-container">$V_o(t)=V_{in}(t)+V_f(t)-V_f(kT-T/2)$</span>. Phase 2 ends at <span class="math-container">$t=kT$</span>, and then the output gets held at <span class="math-container">$V_o(kT)=V_{in}(kT)+V_f(kT)-V_f(kT-T/2)$</span>.</p>
|
<p>There are two ways of analyzing the given system. First, we could simply ignore the sampling process and treat the system as a continuous-time system. This is possible if the input signals are sufficiently band-limited, and if the sampling rate satisfies the Nyquist criterion. In this case we simply have the difference between the original noise signal <span class="math-container">$v_f(t)$</span> and a shifted version <span class="math-container">$v_f(t-T/2)$</span>, which in the Laplace domain corresponds to</p>
<p><span class="math-container">$$\mathcal{L}\big\{v_f(t)-v_f(t-T/2)\big\}=V_f(s)\left(1-e^{-sT/2}\right)$$</span></p>
<p>The second way to analyze the system is in the discrete-time domain. Note however that in this case we need a sampling interval of <span class="math-container">$T/2$</span>, because we need both the sampled signal <span class="math-container">$v_f(kT)$</span> and the sampled version of the shifted signal <span class="math-container">$v_f(kT-T/2)$</span>. This is not possible if the sampling interval is <span class="math-container">$T$</span>.</p>
<p>Modeling sampling mathematically by multiplication with an impulse train, and using a sampling interval of <span class="math-container">$T'=T/2$</span> results in the following representation of a sampled signal:</p>
<p><span class="math-container">$$v_f(t)\cdot\sum_{n=-\infty}^{\infty}\delta(t-nT')=\sum_{n=-\infty}^{\infty}v_f(nT')\delta(t-nT')\tag{1}$$</span></p>
<p>The Laplace transform of <span class="math-container">$(1)$</span> is</p>
<p><span class="math-container">\begin{align*}
\mathcal{L}\left\{\sum_{n=-\infty}^{\infty}v_f(nT')\delta(t-nT')\right\} &=\int_{-\infty}^{\infty}\sum_{n=-\infty}^{\infty}v_f(nT')\delta(t-nT')e^{-st}dt\\
&=\sum_{n=-\infty}^{\infty}v_f(nT')e^{-snT'}\tag{2}
\end{align*}</span></p>
<p>The expression on the right-hand side of <span class="math-container">$(2)$</span> is called the <em>discrete Laplace transform</em>, and it's just the same as the <span class="math-container">$\mathcal{Z}$</span>-transform with <span class="math-container">$z=e^{sT'}$</span>. Hence, a delay of one sample corresponds to a multiplication with <span class="math-container">$z^{-1}=e^{-sT'}=e^{-sT/2}$</span>.</p>
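<p>A small numerical sketch of this equivalence (the signal <span class="math-container">$v_f(t)=e^{-t}$</span> and the evaluation point <span class="math-container">$s=1$</span> are illustrative assumptions, not from the slides):</p>

```python
import numpy as np

T = 1.0
Tp = T / 2                      # sampling interval T' = T/2
s = 1.0                         # a real point inside the region of convergence
a = np.exp(-Tp)                 # samples of v_f(t) = e^{-t}: v_f(n T') = a**n

# Discrete Laplace transform (2): sum over n of v_f(nT') e^{-s n T'}
disc_laplace = sum(a**n * np.exp(-s * n * Tp) for n in range(500))

# Z-transform of a^n u[n], evaluated at z = e^{s T'}
z = np.exp(s * Tp)
z_transform = z / (z - a)
print(disc_laplace, z_transform)            # agree up to truncation

# Delaying the sampled signal by one sample multiplies the transform by
# z^{-1} = e^{-s T'} = e^{-s T/2}
delayed = sum(a**(n - 1) * np.exp(-s * n * Tp) for n in range(1, 500))
print(delayed / disc_laplace, np.exp(-s * T / 2))
```

<p>The ratio of the delayed to the undelayed transform is exactly the factor <span class="math-container">$e^{-sT/2}$</span> that appears in the slide's transfer function.</p>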
| 183
|
Laplace transform
|
Discrete version of this transform?
|
https://dsp.stackexchange.com/questions/86922/discrete-version-of-this-transform
|
<p>I have the following transform for <span class="math-container">$t>0, a_i>0$</span>
<span class="math-container">$$f(t)=\sum_{i=0}^d a_i \exp(-t a_i)$$</span></p>
<p>And I need to invert it for a set of target values <span class="math-container">$b$</span>:</p>
<p>Find <span class="math-container">$(t_0,t_1,\ldots,t_d)$</span> such that <span class="math-container">$f(t_0),f(t_1),\ldots,f(t_d)=(b_0,b_1,\ldots,b_d)$</span></p>
<p>Is there a way to express either forward or backward operation in terms of some standard transform that's practical to use?</p>
<p>In the continuous world, this <a href="https://math.stackexchange.com/questions/4242534/inverse-of-integral-transform-fs-int-0-infty-gx-exp-s-gx-mathbbdx/4244315#4244315">can be turned</a> into Laplace Transform. However, this involves taking derivatives and inverse functions, so I'm not sure what's the best way to do this in discrete world.</p>
<p>Continuous version is written in terms of Laplace transform as follows:</p>
<p><span class="math-container">$$f(t)=\int_0^\infty g(i) \exp(-t g(i))=\mathcal{L}(yg^{-1}(y)'\mathbb{I}_y)$$</span></p>
<p>where <span class="math-container">$g^{-1}(y)'$</span> represents derivative of functional inverse of <span class="math-container">$g$</span> and <span class="math-container">$\mathbb{I}_y$</span> is 1 for <span class="math-container">$y$</span> between 0 and 1 and <span class="math-container">$0$</span> otherwise</p>
| 184
|
|
Laplace transform
|
Why do these 2 methods give different solutions?
|
https://dsp.stackexchange.com/questions/40087/why-do-these-2-methods-give-different-solutions
|
<p><a href="https://i.sstatic.net/bjZR8.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/bjZR8.png" alt="enter image description here"></a></p>
<p>I need to solve what is underlined in red for $x_i$, but currently I'm interested in the right side of the equation only.</p>
<p>On the left I started by taking the Laplace transform of $x_u'$ and $x_u$, and this gives the correct solution.</p>
<p>On the right side I tried to differentiate $x_u$ first, plug everything into the equation and then take the Laplace transform, and this gives a completely different solution. Is this method incorrect? Or am I doing something wrong?</p>
|
<p>The problem is that you took the derivative of the function</p>
<p>$$\hat{x}_u(t)=2e^{-3t}-e^{-4t}\tag{1}$$</p>
<p>whereas using the Laplace transform you implicitly assumed that $x_u(t)$ equals zero for $t<0$:</p>
<p>$$x_u(t)=\hat{x}_u(t)u(t)=(2e^{-3t}-e^{-4t})u(t)\tag{2}$$</p>
<p>where $u(t)$ is the unit step function.</p>
<p>If you take the derivative of $(2)$ then you'll get the same result as with the Laplace transform, taking into account that the derivative of the step function $u(t)$ is the Dirac delta impulse $\delta(t)$:</p>
<p>$$\frac{dx_u(t)}{dt}=\frac{d\hat{x}_u(t)}{dt}u(t)+\hat{x}_u(t)\delta(t)=\frac{d\hat{x}_u(t)}{dt}u(t)+\hat{x}_u(0)\delta(t)\tag{3}$$</p>
<p>where I've used the fact that for any function $f(t)$ that is continuous at $t=0$ we have $f(t)\delta(t)=f(0)\delta(t)$. I trust that you can take it from here.</p>
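<p>If it helps, the identity can be checked symbolically; the following sketch assumes SymPy is available and uses the $0^-$ convention, under which $\mathcal{L}\{\delta(t)\}=1$ and, since $x_u(0^-)=0$, the differentiation property gives $\mathcal{L}\{x_u'(t)\}=sX(s)$:</p>

```python
import sympy as sp

t, s = sp.symbols('t s', positive=True)
xhat = 2*sp.exp(-3*t) - sp.exp(-4*t)        # \hat{x}_u(t); note \hat{x}_u(0) = 1

# Transform of x_u(t) = \hat{x}_u(t) u(t)
X = sp.laplace_transform(xhat, t, s, noconds=True)

# Transform of the correct derivative (3): the pointwise derivative
# plus \hat{x}_u(0) delta(t), whose transform is \hat{x}_u(0)
deriv_transform = (sp.laplace_transform(sp.diff(xhat, t), t, s, noconds=True)
                   + xhat.subs(t, 0))

# Differentiation property: L{x_u'} = s X(s), so the difference must vanish
print(sp.simplify(s*X - deriv_transform))   # -> 0
```

<p>Dropping the impulsive term $\hat{x}_u(0)\delta(t)$ is exactly what makes the two methods in the question disagree.</p>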
| 185
|
Laplace transform
|
S domain vs frequency domain?
|
https://dsp.stackexchange.com/questions/84250/s-domain-vs-frequency-domain
|
<p>Laplace domain is also known as <em>"s domain"</em>.</p>
<p>Is there any difference between <em>"s domain"</em> and <em>"frequency domain"</em>? Can we use both terms interchangeably?</p>
<p>If we want to convert a time domain signal to frequency domain, can we use Laplace transform?</p>
|
<p>The s domain is synonymous with the "complex frequency domain", where time domain functions are transformed into a complex surface (over the s-plane where it converges, the "Region of Convergence") showing the decomposition of the time domain function into decaying and growing exponentials of the form <span class="math-container">$e^{st}$</span> where <span class="math-container">$s$</span> is a complex variable. In terms of its real and imaginary components, <span class="math-container">$s=\sigma + j\omega$</span>. Thus we have <span class="math-container">$e^{st}=e^{(\sigma + j\omega)t} = e^{\sigma t}e^{j\omega t}$</span>.</p>
<p><span class="math-container">$e^{\sigma t}$</span> for real <span class="math-container">$\sigma$</span> is simply a decaying or growing exponential. <span class="math-container">$e^{j\omega t}$</span> for real <span class="math-container">$\omega$</span> is a phasor spinning at a constant rate in time with magnitude 1 (Magnitude 1, angle <span class="math-container">$\omega t$</span>.) For a more complete intuition on the <span class="math-container">$e^{j\omega t}$</span> representation of what a single frequency tone is (that is used throughout DSP), please refer to <a href="https://dsp.stackexchange.com/questions/78906/qualitative-explanation-of-fourier-transform/78911#78911">this link</a>.</p>
<p>The more commonly used unilateral Laplace Transform assumes the time domain is causal and therefore 0 for all time less than zero. If we restrict <span class="math-container">$s$</span> to be the <span class="math-container">$j\omega$</span> axis, then we would get the Fourier Transform with the same causality assumption (if the unilateral Laplace Transform was used).</p>
<p><a href="https://i.sstatic.net/F6wwi.png" rel="noreferrer"><img src="https://i.sstatic.net/F6wwi.png" alt="enter image description here" /></a></p>
<p>It is convenient and sufficient to just show on the s-plane the locations of the extreme singularities; the poles and zeros (where poles are where the surface goes to infinity and zeros are where the surface goes to zero) as every other point on the surface as a complex value with magnitude and phase is uniquely determined by the poles and zeros. Below is an example showing a pole at the origin. If the function is causal then the Region of Convergence (ROC) is the right half plane to the right of the pole. I show a plot below of the magnitude of <span class="math-container">$1/s$</span> for all values of <span class="math-container">$s$</span>, showing the surface I mention (not shown, but the surface would also have a phase at each point on the surface). Due to the ROC, this is only valid on half the plane.</p>
<p><a href="https://i.sstatic.net/MpWqe.png" rel="noreferrer"><img src="https://i.sstatic.net/MpWqe.png" alt="s-plane" /></a></p>
<p>Another example showing both the utility of the Laplace Transform and its relation to the Fourier Transform is shown below. Here we have the Laplace Transform of the impulse response for a filter. The Fourier Transform of a filter's impulse response is its frequency response, but with the Laplace Transform and specifically the locations of the poles and zeros on the s-plane we have much more information beyond what we can get out of Fourier alone. This includes stability, response and settling time, and insights into further tuning and adjustment. If we slice the given surface along the <span class="math-container">$j\omega$</span> axis, we would get the Fourier Transform from the Laplace Transform, as shown in the upper right hand corner.</p>
<p><a href="https://i.sstatic.net/4Vasd.png" rel="noreferrer"><img src="https://i.sstatic.net/4Vasd.png" alt="Filter Example" /></a></p>
<p>In the plot I use the term "correlation" here somewhat loosely, as it provides great intuition. We can think of the Laplace Transform as correlating our function <span class="math-container">$x(t)$</span> to all possible values of <span class="math-container">$e^{st}$</span> and showing us in the complex surface, the relative magnitude (strength) and phase for all possible values of <span class="math-container">$e^{st}$</span> in <span class="math-container">$x(t)$</span>. What is actually occurring is better termed as a mapping from one space to another, but as I detail further <a href="https://dsp.stackexchange.com/questions/78906/qualitative-explanation-of-fourier-transform/78911?noredirect=1#comment167118_78911">here</a>, the operation is indeed very similar to correlation as an integration of complex conjugate products, with the distinction here that both the real and imaginary terms of the exponent for the basis function <span class="math-container">$e^{st}$</span> are negated in the Laplace Transform given as:</p>
<p><span class="math-container">$$X(s) = \int_0^\infty x(t)e^{-st}dt$$</span></p>
<p>For the curious, the above filter is a 2nd order filter, with two complex poles in the left half plane (so stable) at <span class="math-container">$s=-0.2\pm j0.5$</span>. The impulse response is:</p>
<p><span class="math-container">$$x(t) = 2e^{-0.2t}\sin(0.5t), t\ge 0$$</span></p>
<p>Which results in the Laplace Transform as given in the plot.</p>
<p>The two poles at <span class="math-container">$s=-0.2\pm j0.5$</span> in the resulting Laplace Transform suggest that the impulse response <span class="math-container">$x(t)$</span> has two fundamental components consisting of decaying exponentials given by <span class="math-container">$e^{st}$</span> for <span class="math-container">$s$</span> at the given pole locations. (The surface of the Laplace Transform is a result of a form of correlation, and will be maximum (poles) where the components correlate the strongest):</p>
<p><span class="math-container">$$e^{st}=e^{(-0.2\pm j0.5)t} = e^{-0.2t}e^{j0.5t} \text{ and } e^{-0.2t}e^{-j0.5t}$$</span></p>
<p>Indeed we see the factor <span class="math-container">$e^{-0.2t}$</span> directly in the expression for <span class="math-container">$x(t)$</span>, and note from Euler's identity that:</p>
<p><span class="math-container">$$\sin(0.5t) = \frac{e^{j0.5t}-e^{-j0.5t}}{2j}$$</span></p>
<p>Showing how both rotating phasor components given by <span class="math-container">$e^{\pm j 0.5t}$</span> do exist in <span class="math-container">$x(t)$</span>. Do not be thrown off by the <span class="math-container">$j$</span> in the denominator and the subtraction for <span class="math-container">$e^{-j0.5t}$</span>; the surface as depicted above is the magnitude plot, but each point on that surface is a complex value with a magnitude and phase.</p>
<p>The resulting Frequency Response magnitude, as the Fourier Transform of the impulse response, extracted from Laplace by restricting <span class="math-container">$s$</span> to be the <span class="math-container">$j\omega$</span> axis, is shown below. This is plotted on a log log plot with just the positive frequencies as we would typically view the magnitude response for a continuous time filter:</p>
<p><a href="https://i.sstatic.net/7VLUG.png" rel="noreferrer"><img src="https://i.sstatic.net/7VLUG.png" alt="low pass filter" /></a></p>
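<p>For the skeptical reader, the closed-form Laplace transform of the example impulse response can be verified by brute-force numerical integration (the evaluation point <span class="math-container">$s=1$</span> and the integration grid below are arbitrary choices for this sketch):</p>

```python
import numpy as np

# Impulse response from the filter example above
x = lambda t: 2 * np.exp(-0.2 * t) * np.sin(0.5 * t)

# Known closed form: L{2 e^{-0.2t} sin(0.5t)} = 1 / ((s + 0.2)^2 + 0.25)
X = lambda s: 1.0 / ((s + 0.2) ** 2 + 0.25)

# Evaluate the Laplace integral numerically at a real point in the ROC (Re{s} > -0.2)
s = 1.0
dt = 0.001
t = np.arange(0.0, 80.0, dt)
f = x(t) * np.exp(-s * t)
numeric = (f[:-1] + f[1:]).sum() * dt / 2   # trapezoidal rule; the tail beyond t=80 is negligible

print(numeric, X(s))   # both ~0.5917
```

<p>Repeating this for a grid of complex <span class="math-container">$s$</span> values traces out exactly the surface plotted above.</p>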
| 186
|
Laplace transform
|
Is there an analogy of the Fourier-decomposition in the Laplace space to decompose a signal to a few components?
|
https://dsp.stackexchange.com/questions/56644/is-there-an-analogy-of-the-fourier-decomposition-in-the-laplace-space-to-decompo
|
<p>I have a signal from which I know, that it is the sum of a few, exponentially decaying components. I want to find these components.</p>
<p>If it would be a sum of some sinusiod waves, it would be easy to Fourier-transform it, and then find the <a href="https://en.wikipedia.org/wiki/Kronecker_delta" rel="nofollow noreferrer">Kronecker-deltas</a> with some heuristics. It is because the Fourier-transform of a sinusiodal signal is a <span class="math-container">$\delta$</span>.</p>
<p>If the signal would be the superposition of some dampened oscillators, as it is asked in <a href="https://dsp.stackexchange.com/q/49057/17850">this question</a>, the problem would be still easily solvable by a 2d fourier transform. If I understand it well, the trick is that we use the fourier space to decompose the decaying signals. I think it would work only if the oscillators would have a different decay time constant, if they belong to different frequencies.</p>
<p><sub><strong>However, my signal is not from dampened oscillators, it is a simple decaying signal without any oscillation, making this question not a dupe.</strong></sub></p>
<p>My first naive try was to Laplace-transform it, and then proceed as in the Fourier case. It does not work, because <span class="math-container">$\mathbb L \{ae^{-bt}\}=\frac{a}{s+b}$</span>. First, it is singular, making its digital processing problematic, but the major problem is that I can't see any easy way to find the components in the result.</p>
<p>Maybe Laplace is not usable for that, or it should be done with Laplace, but somehow differently?</p>
| 187
|
|
Laplace transform
|
A question about the meaning of pole in time domain
|
https://dsp.stackexchange.com/questions/40629/a-question-about-the-meaning-of-pole-in-time-domain
|
<p>Lets say I have a transfer function $H(s)$ of a system defined in $s$-domain as:
$$H(s) = \frac{1}{s - (-1-j)}$$</p>
<p>So I conclude that the pole on the $s$-plane is where $s = 1+j$. So far so good.</p>
<ul>
<li><p>Now does that mean if the Laplace transform of the input to the system is $s = 1+j$ the system goes crazy/infinity/oscillate?</p></li>
<li><p>Does that mean if I take the inverse Laplace transform of $1+j$ I can find what input in time domain makes the system unstable?</p></li>
</ul>
|
<p>Let $H(s)$ be a transfer function of the form
$$H(s) = \frac{1}{s-p}$$
where $p$, which is a pole of $H(s)$, can be written as a complex number $a+jb$. Taking the inverse Laplace transform of $H(s)$ gives the corresponding impulse response $h(t)$ (that is, the output of your system when given $\delta(t)$ as input). Noting $\mathcal{L}^{-1}$ the inverse Laplace transform, we have
$$h(t) = \mathcal{L}^{-1}\{H(s)\} = e^{pt} = e^{at}e^{jbt}.$$
Now let's look at what this impulse response looks like. The term $e^{at}$ is a simple exponential which will be either decaying (if $a < 0$) or growing (if $a > 0$) with time. The term $e^{jbt}$ will be responsible for oscillations in the output of your system (remember that $e^{jbt} = \cos(bt) + j\sin(bt)$). From this, you can infer the stability of your system and understand why we need poles in the left half of the $s$-plane (i.e. we need $a < 0$) for the system to be stable.</p>
<p>Often, the numerator and the denominator of your transfer function have real coefficients, and in this case poles appear in complex conjugate pairs. You could for example have
$$h(t) = e^{at}(e^{jbt} + e^{-jbt}) = 2e^{at}\cos(bt).$$</p>
<p>I like to keep this picture in mind (taken from <a href="http://web.mit.edu/2.14/www/Handouts/PoleZero.pdf" rel="noreferrer">here</a>) which greatly summarizes this.
<a href="https://i.sstatic.net/ImnRx.png" rel="noreferrer"><img src="https://i.sstatic.net/ImnRx.png" alt="Pole-zero plot and link with the impulse response"></a></p>
<p>For more complex transfer functions, partial fraction decomposition can be used to reduce them to the simple cases presented here.</p>
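<p>A quick numerical sketch of the picture above (the pole values below are chosen purely for illustration): with a conjugate pole pair $p=a\pm jb$, the response $2e^{at}\cos(bt)$ decays when the poles sit in the left half-plane and grows when they sit in the right half-plane.</p>

```python
import numpy as np

# Impulse response for a conjugate pole pair p = a +/- jb: h(t) = 2 e^{at} cos(bt)
h = lambda t, a, b: 2 * np.exp(a * t) * np.cos(b * t)

t = np.linspace(0.0, 20.0, 2001)
b = 2.0                                   # Im{p} sets the oscillation rate

print(np.abs(h(t, -0.5, b))[-1])          # poles in the left half-plane: decays away
print(np.abs(h(t, +0.5, b))[-1])          # poles in the right half-plane: blows up
```

<p>Varying $b$ only changes how fast the envelope oscillates, matching the pole-location diagram linked above.</p>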
| 188
|
Laplace transform
|
LTI system response to periodic input
|
https://dsp.stackexchange.com/questions/30712/lti-system-response-to-periodic-input
|
<p>I'm trying to find the zero-state response (ZSR) of an LTI system to a one sided periodic input, like a square wave that is equals to zero for $t < 0$.</p>
<p>I know that I can use the Fourier series of said input function to find the steady-state (SS) response, however I'm having trouble understanding how to use the Laplace transform to obtain the ZSR, which contains the SS component plus a transient one.</p>
<p>My guess is that I need to calculate the Laplace transform of the periodic input, and then solve the system for $s$ to obtain the transform of the output.</p>
|
<p>You can use the Laplace transform, but can also simply use convolution in the time domain. In any case, you will need the system's impulse response $h(t)$. Let the input signal $x(t)$ satisfy $x(t)=0$ for $t<0$, and $x(t+T)=x(t)$ for $t>0$ and $T>0$, as required. Furthermore, let $f(t)$ be the first period of $x(t)$:</p>
<p>$$f(t)=\begin{cases}x(t),&0<t<T\\0,&\text{otherwise}\end{cases}\tag{1}$$</p>
<p>Then</p>
<p>$$x(t)=\sum_{n=0}^{\infty}f(t-nT)=f(t)\star\sum_{n=0}^{\infty}\delta(t-nT)\tag{2}$$</p>
<p>where $\star$ denotes convolution, and $\delta(t)$ is the Dirac delta impulse. If $g(t)$ denotes the convolution $h(t)\star f(t)$, i.e., the system's response to $f(t)$, then the output signal can be written as</p>
<p>$$\begin{align}y(t)&=h(t)\star x(t)\\&=h(t)\star f(t)\star\sum_{n=0}^{\infty}\delta(t-nT)\\&=g(t)\star \sum_{n=0}^{\infty}\delta(t-nT)\\&=\sum_{n=0}^{\infty}g(t-nT)\tag{3}\end{align} $$</p>
<p>According to $(3)$, the output signal can be written as a sum of shifted responses to the finite length signal $f(t)$, which corresponds to the first period of the input signal.</p>
<p>The Laplace transform of the output signal can also be written in terms of the transforms of the input signal and of the function $f(t)$. Note that</p>
<p>$$f(t)=x(t)-x(t-T)\tag{4}$$</p>
<p>The Laplace transform of $(4)$ is</p>
<p>$$F(s)=X(s)(1-e^{-sT})\tag{5}$$</p>
<p>If $H(s)$ is the system's transfer function, i.e. the Laplace transform of the impulse response $h(t)$, then the output is given by</p>
<p>$$Y(s)=H(s)X(s)=H(s)\frac{F(s)}{1-e^{-sT}}\tag{6}$$</p>
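<p>A discrete-time sketch of the superposition in $(3)$ (the square-wave period, the impulse response, and the number of periods below are arbitrary illustrative choices):</p>

```python
import numpy as np

P = 20                                            # samples per period (illustrative)
f = np.r_[np.ones(P // 2), np.zeros(P // 2)]      # first period: a square-wave pulse
h = 0.8 ** np.arange(40)                          # some decaying impulse response
reps = 6                                          # number of periods simulated

# Direct route: build the one-sided periodic input and convolve with h
x = np.tile(f, reps)
y = np.convolve(h, x)

# Eq. (3): the same output as a sum of shifted copies of g = h * f
g = np.convolve(h, f)
y2 = np.zeros_like(y)
for n in range(reps):
    y2[n * P : n * P + len(g)] += g

print(np.allclose(y, y2))    # True
```

<p>The early periods show the transient while later periods settle toward the steady-state response, exactly as the sum of shifted $g(t-nT)$ predicts.</p>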
| 189
|
Laplace transform
|
DFT/FFT Transfer function
|
https://dsp.stackexchange.com/questions/27963/dft-fft-transfer-function
|
<p>I want to play and record a sine sweep.
When I have both signals, the recorded one and the sent one, I can create a transfer function.
That is what I know so far.</p>
<p>$$
H_0 = \frac{OUT}{IN} = \frac{Y}{X}
$$</p>
<p>Where I'm stuck is that when I read about the transfer function, it is all about the $Laplace \, transform$ (and sometimes the $Z$-transform),
while I use the $FFT/DFT$. I tried some GNU Octave code to compute a transfer function, and I think the result looks like what I expected.<br>
But now I'm curious</p>
<ol>
<li>if the $FFT/DFT$ can be used for/as a transfer function, and </li>
<li>what is the relation between an $FFT/DFT$, $Laplace \, Transform$ and the $Z-Transform$</li>
</ol>
|
<p>Your question is fairly broad, let me answer it step by step. </p>
<p>First of all, $H(s)$ is indeed called the transfer function, and is the Laplace transform of the impulse response. It's useful for finding the poles and zeroes, which is something the Fourier transform alone can't do. $H(\omega)$ is the frequency response of the impulse response, not really the "transfer function". </p>
<p>If the frequency response is what you want to attain, then you can use the frequency response (fourier transform). It does not exist for unstable signals though.</p>
<p>The relation between Fourier Transform, discrete Fourier transform and Laplace transform/z-transform:</p>
<p>The Fourier transform and the DFT are in principle the same: the DFT is for discrete signals, and the Fourier transform for continuous signals. Because of that there are some different properties, like frequency-domain aliasing. What both of these transforms do is decompose the signal into complex exponentials, easily converted into sines and cosines (or sinusoids with magnitude and phase), which is the original idea.</p>
<p>Laplace and z-transform are again in principle the same; the z-transform is the discrete equivalent of Laplace. It's simply the Fourier transform/DFT with the signal multiplied by a varying exponential $e^{\sigma t}$, or in the discrete case just $r^n$. Then you have multiple Fourier transforms/DFTs, one for each value of the exponential ($r$ or $\sigma$). Points of interest are the poles and zeroes, i.e., the points where the transform of the impulse response multiplied by such an exponential diverges (poles) or is exactly zero (zeroes).</p>
<p>Note that when $r=1$, or $\sigma = 0$, the Laplace/Z-transform is equal to the Fourier transform. In other words, the Laplace/Z-transform contains the Fourier transform in it.</p>
<p>EDIT: I wrote this fairly quickly so please point out any errors.</p>
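<p>To make point 1 concrete: if the output is related to the input by circular convolution (or the signals are suitably zero-padded), the bin-by-bin ratio of DFTs recovers the system's frequency response exactly. The FIR coefficients and signal length below are arbitrary illustrative choices, not from the question:</p>

```python
import numpy as np

rng = np.random.default_rng(0)
N = 256
x = rng.standard_normal(N)          # stand-in for the sweep (any broadband input works)

h = np.zeros(N)
h[:4] = [0.5, 0.3, -0.2, 0.1]       # some short FIR system (arbitrary coefficients)

# Output via circular convolution (multiplication of DFTs)
y = np.fft.ifft(np.fft.fft(x) * np.fft.fft(h)).real

# Transfer-function estimate H = Y / X, bin by bin
H_est = np.fft.fft(y) / np.fft.fft(x)

print(np.allclose(H_est, np.fft.fft(h)))    # True: the ratio recovers the system
```

<p>With a real measurement, noise and non-circular effects mean the ratio is only an estimate, and averaging or a proper sweep-deconvolution method is preferred; the input should also have energy at every bin so the division is well conditioned.</p>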
| 190
|
Laplace transform
|
Why is z (and not ω) the variable of interest for discrete time systems?
|
https://dsp.stackexchange.com/questions/75831/why-is-z-and-not-%cf%89-the-variable-of-interest-for-discrete-time-systems
|
<p>A continuous time domain system is well described by the Laplace transform. It allows to express any continuous signal x(t) as the integral sum of weighted complex and exponentially growing/decaying sine waves <span class="math-container">$e^{st} = e^{\sigma t} \cdot e^{j\omega t}$</span>:</p>
<p><a href="https://i.sstatic.net/eH2aH.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/eH2aH.png" alt="enter image description here" /></a></p>
<p>Where X(s) is the Laplace Transform and may be evaluated as:</p>
<p><a href="https://i.sstatic.net/z0JHz.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/z0JHz.png" alt="enter image description here" /></a></p>
<p>The variable of interest (on which the Laplace Transform depends) is the complex angular frequency <span class="math-container">$s = \sigma + j\omega$</span>. If <span class="math-container">$\sigma= 0$</span>, the Laplace Analysis coincides with the Fourier analysis since <span class="math-container">$s = j\omega$</span></p>
<p>In a discrete time Frequency, the Z transform is usually used:</p>
<p><a href="https://i.sstatic.net/sa5LJ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/sa5LJ.png" alt="enter image description here" /></a><a href="https://i.sstatic.net/kOJ3h.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/kOJ3h.png" alt="enter image description here" /></a></p>
<p><a href="https://i.sstatic.net/phUIt.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/phUIt.png" alt="enter image description here" /></a><a href="https://i.sstatic.net/GRnPE.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/GRnPE.png" alt="enter image description here" /></a></p>
<p><strong>Questions</strong></p>
<ol>
<li><p>What does the synthesis equation of Z Transform mean? Does it mean that the discrete time domain sequence is expressed as the sum of weighted complex power functions <span class="math-container">$z^{n-1}$</span>?</p>
</li>
<li><p>Is <span class="math-container">$z$</span> the variable of interest? Often I read that if we replace z with <span class="math-container">$e^{j\omega t}$</span>, we get the DFT. Correct, but the DFT is an exponential function of <span class="math-container">$\omega$</span>, and not a power function of <span class="math-container">$z$</span>. Which is (between <span class="math-container">$z$</span> and <span class="math-container">$\omega$</span>) the variable of interest for a discrete time sequence and why?</p>
</li>
<li><p>I think understanding which is the significant variable (between <span class="math-container">$z$</span> and <span class="math-container">$\omega$</span>) is crucial to understand <a href="https://en.wikipedia.org/wiki/Bilinear_transform#Frequency_warping" rel="nofollow noreferrer">Frequency warping</a>, that is the frequency distorsion due to the fact that the real angular frequency axis <span class="math-container">$[-\infty;+\infty]$</span> becames the unit circumference <span class="math-container">$z = e^{j\omega t}$</span> . Well, but this is due only to the fact that <strong>we decide that the variable of interest for a discrete-time sequence is z instead of <span class="math-container">$\omega$</span></strong>. Also Fourier (and Laplace) Transform of continuous signals contains <span class="math-container">$e^{j\omega t}$</span>, but we don't say "We put <span class="math-container">$z = e^{j\omega t}$</span> hence there is distorsion", and the variable of interest is assumed to be <span class="math-container">$\omega$</span>. I've never seen people complaining about this for continuous signals.</p>
</li>
</ol>
<p>It seems that: "Until it's continuous, <span class="math-container">$\omega$</span> is important. When your signal becomes discrete, <span class="math-container">$z$</span> is important". But I do not understand why.</p>
|
<p>The z-transform is the discrete version of the Laplace transform, and in both cases z and s range over the set of all complex numbers; the transform maps the time domain function into the domain of complex frequencies: signals that change in rotation only (which gives the Fourier Transform) and, in addition, signals that grow or decay in time. This leads to great mathematical simplifications; for example, it translates integro-differential equations into simple algebra.</p>
<p>The question is better phrased "why use z instead of s", as what we would refer to as the frequency “axis” is a one dimensional subset of the complex surface for both cases: In the s plane, the frequency axis is the <span class="math-container">$j\omega$</span> axis (the imaginary axis of the s-plane) and in the z-plane, the frequency axis is the unit circle.</p>
<p>The significant convenience of the z-plane is clear when you compare the equations for the Laplace Transform, to the Laplace Transform for a discrete time sequence, and finally with a simple substitution of <span class="math-container">$z= e^{sT}$</span> in the Laplace Transform for the discrete time sequence we arrive at the z-transform. Notably, this translates a transcendental equation to a simple polynomial that we can resolve to finite poles and zeros and takes advantage of the repetition in discrete time resulting in a much simpler equation for further manipulation. One can continue to process everything using Laplace, both continuous time and discrete time waveforms, but why take that punishment when the z-transform can be used instead for the discrete time cases?</p>
<p>Consider the Laplace Transform for a causal continuous time sequence:</p>
<p><span class="math-container">$$X(s) = \int_{t=0}^{\infty} x(t) e^{-st }dt$$</span></p>
<p>The Laplace Transform when applied to a discrete time sequence becomes:</p>
<p><span class="math-container">$$X(s) = \sum_{n=0}^{\infty} x(nT) e^{-snT}$$</span></p>
<p>Note this simple case of applying this formula to solve for the Laplace Transform of a discrete-time two sample moving average, and then solving for the poles and zeros:</p>
<p><a href="https://i.sstatic.net/zfumm.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/zfumm.png" alt="example" /></a></p>
<p>If we substitute <span class="math-container">$z = e^{sT}$</span> the transform then becomes the "z Transform" as:</p>
<p><span class="math-container">$$X(z) = \sum_{n=0}^{\infty} x(nT) z^{-n}$$</span></p>
<p>And the example above becomes the much simpler polynomial ratio as:</p>
<p><span class="math-container">$$H(z) = \frac{1}{2}+\frac{1}{2}z^{-1} = \frac{z+1}{2z}$$</span></p>
<p>This simplification is not limited to discrete-time systems, as we can have similar complexities in continuous time for transfer functions involving fixed delays (with the same challenge of having a transcendental equation versus simpler polynomials of fixed order), in which case we could make the same substitution to proceed with further processing and mathematical manipulation. The point is that in discrete time we are always working with systems of consistent unit delays, and the z-transform abstracts away the exponential that is always there, sparing us from carrying it through every computation unnecessarily. It is much simpler. This is no different from other mathematical mappings and changes of basis, with the main objective that in the new space it is much easier to manipulate the equations and find solutions; if the original space was important, we can transform back to it after all the heavy lifting is done.</p>
<p>Frequency warping only occurs if we choose to map transfer functions from continuous time to discrete time using the Bilinear Transform method and this is only because this method will provide a one to one mapping from the unique continuous frequency domain which extends to <span class="math-container">$\pm \infty$</span> to the unique frequency domain for sampled systems which extends from <span class="math-container">$-f_s/2$</span> to <span class="math-container">$+f_s/2$</span>. Other mappings exist with no warping but will instead have aliasing.</p>
| 191
|
Laplace transform
|
What is the difference between delay and difference properties of z-transform?
|
https://dsp.stackexchange.com/questions/82466/what-is-the-difference-between-delay-and-difference-properties-of-z-transform
|
<p>I'm working on a discrete updating algorithm as follows:</p>
<p><span class="math-container">$x[n+1]=Kx[n]$</span><br />
Here <span class="math-container">$K$</span> is a constant.</p>
<p>The continuous counterpart of this algorithm translates to:</p>
<p><span class="math-container">$\dot{x}(t)=Kx(t)$</span></p>
<p>While the Laplace transform of the continuous one is quite obvious, I'm struggling to find the z-transform of the discrete one. Do I use the delay property? or the first difference property?</p>
|
<p>If, by the Laplace transform,
<span class="math-container">$$\dot{x}(t)=Kx(t)$$</span>
becomes
<span class="math-container">$$sX(s)=KX(s)$$</span>
then, by analogy, using the z-transform,
<span class="math-container">$$x[n+1]=Kx[n]$$</span>
becomes
<span class="math-container">$$zX(z)=KX(z)$$</span>
This is simply using the time-shift property (here a unit advance, with initial conditions ignored).</p>
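<p>A minimal time-domain sketch of what this means (my addition, with arbitrary values): the single pole at z = K corresponds to the closed-form solution x[n] = K^n x[0] of the recursion.</p>

```python
# Sketch with arbitrary values: iterating x[n+1] = K*x[n] reproduces the
# closed form x[n] = K**n * x[0] implied by the single pole at z = K
# (initial conditions aside).
K, x0, N = 0.5, 2.0, 12
x = [x0]
for _ in range(N):
    x.append(K * x[-1])

assert all(abs(x[n] - x0 * K ** n) < 1e-12 for n in range(N + 1))
```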
| 192
|
Laplace transform
|
Not getting the same step response from Laplace transform and it's respective difference equation
|
https://dsp.stackexchange.com/questions/88753/not-getting-the-same-step-response-from-laplace-transform-and-its-respective-di
|
<p>I am trying to simulate a plant on a microcontroller. The transfer function of the plant is</p>
<p><span class="math-container">$$ G_{p} \left( s \right) = \frac{2}{\left( s + 3 \right) \left( s - 1 \right)} \tag{1} \label{1}$$</span></p>
<p>The step response for this function from Octave is</p>
<p><a href="https://i.sstatic.net/z7lul.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/z7lul.png" alt="Step response of Gp(s)" /></a></p>
<p>The value goes to <span class="math-container">$200$</span> in <span class="math-container">$6$</span> seconds and this is what I am trying to reproduce through the difference equation I show a little later.</p>
<p>The Z transform of the above with <span class="math-container">$T_{s}$</span> of <span class="math-container">$0.001 ~ s$</span> with zero-order hold is</p>
<p><span class="math-container">$$ G_{p} \left( z \right) = \frac{9.993 \cdot 10^{-7} z + 9.987 \cdot 10^{-7}}{z^{2} - 1.998 z + 0.998} \tag{2} \label{2} $$</span></p>
<p>The difference equation derived from <span class="math-container">$G_{p} \left( z \right)$</span> is</p>
<p><span class="math-container">$$ y \left( t \right) = 9.993 \cdot 10^{-7} x \left( t - T_{s} \right) + 9.987 \cdot 10^{-7} x \left( t - 2 T_{s} \right) + 1.998 y \left( t - T_{s} \right) - 0.998 y \left( t - 2 T_{s} \right) \tag{3} \label{3} $$</span></p>
<p>Here is the C code I wrote to realise the above difference equation</p>
<pre class="lang-c prettyprint-override"><code>#include<stdio.h>

float xtp0 = 0.0;
float etp0 = 0.0;
float xtp0_minus_Ts = 0.0;
float etp0_minus_Ts = 0.0;
float xtp0_minus_2Ts = 0.0;
float etp0_minus_2Ts = 0.0;

float plant0(float input){
    etp0 = input;
    xtp0 = (9.993e-7F * etp0_minus_Ts)
         + (9.987e-7F * etp0_minus_2Ts)
         + (1.998F * xtp0_minus_Ts)
         - (0.998F * xtp0_minus_2Ts);
    //Saving the history
    xtp0_minus_2Ts = xtp0_minus_Ts;
    etp0_minus_2Ts = etp0_minus_Ts;
    xtp0_minus_Ts = xtp0;
    etp0_minus_Ts = etp0;
    return xtp0;
}

int main(){
    float x = 0.0F;
    int i;
    for(i = 0; i < 6000; i++){
        if(i == 0){
            x = plant0(0.0F);
        }
        else{
            x = plant0(1.0F);
        }
    }
    printf("%f\n", x);
}
</code></pre>
<p>I am trying to run the loop <span class="math-container">$6000$</span> times as that would amount to <span class="math-container">$6$</span> seconds since the sampling period is <span class="math-container">$0.001$</span> seconds, and passing the value of <span class="math-container">$1$</span> each time I call the <code>plant0</code> function (thus passing <span class="math-container">$1$</span>, <span class="math-container">$6000$</span> times to <code>plant0</code>). My understanding is that passing a value of <span class="math-container">$1$</span> is equivalent to getting the step response of this function. I am expecting the value to be <span class="math-container">$200$</span> as observed in the step response graph. However, I get a value of <span class="math-container">$5.321684$</span> from the program. Running the same program on the microcontroller is also giving the same output of <span class="math-container">$5.321684$</span>.</p>
<p>My intention as I stated previously, is to make the difference equation respond in the same way as the step response seen in the plot. Where am I going wrong here?</p>
|
<p>The difference equation as written in your question is wrong, but I see that you implemented the correct version, also using delayed versions of the output to compute the current output.</p>
<p>The problem is that you used truncated values for the coefficients. You need to represent the coefficients with high accuracy in order for the system to behave as desired. You can't just round to three decimals.</p>
<p>You could actually compute the correct coefficients during an initialization phase in your code, but for a first test you might want to try these more accurate values (<code>b</code> are the numerator coefficients, and <code>a</code> are the denominator coefficients):</p>
<p><code>b = [9.993339163960613e-07 9.986679158080491e-07]</code></p>
<p><code>a = [1 -1.998004995670081e+00 9.980019986673331e-01]</code></p>
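<p>As a sketch of that initialization idea (my addition, written in Python for brevity; the same double-precision arithmetic ports directly to C), the exact ZOH coefficients follow from partial fractions of <span class="math-container">$G_p(s)/s$</span>, with no control toolbox needed:</p>

```python
from math import exp

# ZOH discretization of Gp(s) = 2 / ((s + 3)(s - 1)) with Ts = 0.001,
# done by hand via partial fractions of Gp(s)/s.
T = 0.001
p1, p2 = exp(-3 * T), exp(T)        # discrete poles: z = exp(s_pole * T)

# Gp(s)/s = A/s + B/(s + 3) + C/(s - 1)
A, B, C = -2.0 / 3.0, 1.0 / 6.0, 0.5

# Gd(z) = (1 - 1/z) * Z{ sampled step response }
#       = A + B(z - 1)/(z - p1) + C(z - 1)/(z - p2)
# Collecting over (z - p1)(z - p2) gives (b1*z + b2) / (z^2 + a1*z + a2):
b1 = -(A * (p1 + p2) + B * (1 + p2) + C * (1 + p1))
b2 = A * p1 * p2 + B * p2 + C * p1
a1 = -(p1 + p2)
a2 = p1 * p2

# These match the high-accuracy values quoted above.
assert abs(b1 - 9.993339163960613e-07) < 1e-12
assert abs(b2 - 9.986679158080491e-07) < 1e-12
assert abs(a1 + 1.998004995670081) < 1e-12
assert abs(a2 - 0.9980019986673331) < 1e-12
```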
<p>With these coefficient values and with the correct difference equation, the step response of the discrete-time system is indeed just a sampled version of the continuous-time step response, as shown in the figure below:</p>
<p><a href="https://i.sstatic.net/jAhiH.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/jAhiH.png" alt="enter image description here" /></a></p>
| 193
|
Laplace transform
|
Why is the digital frequency response taken on the unit circle, while the analog is taken along the imaginary axis?
|
https://dsp.stackexchange.com/questions/95816/why-is-the-digital-frequency-response-taken-on-the-unit-circle-while-the-analog
|
<p>For digital signals, the Fourier transform is taken along the unit circle of the Z-transform.<br />
The equivalent of the Z-transform for continuous signals is the Laplace transform, but in that case the Fourier transform is taken along the imaginary axis.</p>
<p>Why the difference? Why don't we take the DTFT along the imaginary axis of the z-plane?<br />
A usual intuition when learning is that the real axis in Laplace corresponds to exponentials (<span class="math-container">$e^x$</span>), while the imaginary axis corresponds to sinusoids <span class="math-container">$e^{iy} = \cos(y) + i\sin(y)$</span>. Wouldn't that relationship hold in the Z-transform too?</p>
|
<p>One way to see this is to consider the Laplace transform of a sampled signal:</p>
<p><span class="math-container">$$x_d(t)=\sum_{n=0}^{\infty}x(nT)\delta(t-nT)\tag{1}$$</span></p>
<p>where I've assumed that <span class="math-container">$x(t)$</span> starts at <span class="math-container">$t=0$</span>. <span class="math-container">$T$</span> is the sampling period, and <span class="math-container">$\delta(t)$</span> is the Dirac delta impulse.</p>
<p>Taking the Laplace transform of <span class="math-container">$(1)$</span> gives</p>
<p><span class="math-container">\begin{align*}
X_d(s) &= \int_0^{\infty}\sum_{n=0}^{\infty}x(nT)\delta(t-nT)e^{-st}dt \\
&= \sum_{n=0}^{\infty}x(nT)\int_{0}^{\infty}\delta(t-nT)e^{-st}dt \\
&= \sum_{n=0}^{\infty}x(nT)e^{-snT}\tag{2}
\end{align*}</span></p>
<p>Evaluating <span class="math-container">$(2)$</span> on the imaginary axis <span class="math-container">$s=j\omega$</span> results in the DTFT of the sampled signal:</p>
<p><span class="math-container">$$X_d(j\omega)=\sum_{n=0}^{\infty}x(nT)e^{-j\omega nT}\tag{3}$$</span></p>
<p>In <span class="math-container">$(3)$</span> it is obvious that by varying <span class="math-container">$\omega$</span> we move along the unit circle. <span class="math-container">$X_d(j\omega)$</span> is periodic with period <span class="math-container">$2\pi/T$</span>, and for this reason it is frequently written as a function of <span class="math-container">$e^{j\omega}$</span>.</p>
<p>Setting <span class="math-container">$e^{sT}=z$</span> in <span class="math-container">$(2)$</span> gives us the <span class="math-container">$\mathcal{Z}$</span>-transform of the sequence <span class="math-container">$x(nT)$</span>.</p>
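<p>A small numeric check of this equivalence (my own sketch, with an arbitrary decaying signal): evaluating (2) at s = jω gives the same value as evaluating the z-transform at z = e^{jωT}, i.e. on the unit circle.</p>

```python
import cmath

# Sketch: the Laplace transform (2) of a sampled signal, evaluated at
# s = j*omega, equals the z-transform evaluated at z = exp(j*omega*T).
T = 0.125
x = [0.9 ** n for n in range(200)]                 # arbitrary decaying samples

def X_laplace(s):
    return sum(xn * cmath.exp(-s * n * T) for n, xn in enumerate(x))

def X_z(z):
    return sum(xn * z ** (-n) for n, xn in enumerate(x))

omega = 3.0
assert abs(X_laplace(1j * omega) - X_z(cmath.exp(1j * omega * T))) < 1e-9
```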
| 194
|
Laplace transform
|
Creating a digital filter, from Laplace to $\mathcal Z$-transform (zero order hold) to code?
|
https://dsp.stackexchange.com/questions/18329/creating-a-digital-filter-from-laplace-to-mathcal-z-transform-zero-order-ho
|
<p>I'm trying to create a digital filter in code(C) but any language is fine. Now I've got an analogue filter that I have represented by an equation in the Laplace domain and I want to try and implement it digitally. </p>
<p>So my filter has this form in the Laplace domain:
$$\frac{as+b}{cs^2+ds}$$</p>
<p>I then use MATLAB's <code>c2d</code> command which uses the zero order hold transformation (I have a really poor grasp on this, so this might be wrong) and it gives me this formula:</p>
<p>$$\frac{\left(5\cdot 10^5\right)z-67}{z^2-z}$$</p>
<p>I tried following an <a href="http://liquidsdr.org/blog/pll-howto/" rel="nofollow">example</a> that I found that used the Tustin's method, though when I use the <code>c2d</code> function in MATLAB with Tustin it gives me an error.</p>
<p>My attempt has been</p>
<p>$$\frac{hz-i}{jz^2-kz}$$</p>
<p>$b_0=-i, b_1=h, b_2=0, a_0=0, a_1=-k, a_2=j$</p>
<p>Then from this I've tried (which is wrong)
\begin{align}
\text{output}&=z_0 b_0+z_1b_1+z_2b_2\\
z_2&=z_1\\
z_1&=z_0\\
z_0&=\text{input}-a_0z_0-a_1z_1-a_2z_2
\end{align}</p>
|
<p>The example I looked at used a Tustin (bilinear) conversion, not a zero-order hold (the default for MATLAB's <code>c2d</code> command). So this is more an answer to what I wanted to do rather than the question I asked above.</p>
<p>I solved the following (converting the s-domain function into code) by taking the s-domain function
$$\frac{as+b}{cs^2+ds}$$</p>
<p>and putting this into MATLAB (command <code>g=tf([a b],[c d 0])</code>). Then I performed the bilinear conversion with the MATLAB command <code>c2d(g,Ts,'tustin')</code>, where <code>g</code> is my transfer function and <code>Ts</code> my sampling period. This produced the output</p>
<p>$$\frac{ez^2+fz+g}{iz^2+jz+k}$$</p>
<p>The a and b coefficients can then be taken from this equation such that (if $i\neq 1$, the equation first needs to be normalized by dividing through by $i$):
$b_0=e$, $b_1=f$, $b_2=g$,
$a_0=i$, $a_1=j$, $a_2=k$.</p>
<p>This can then be converted to code by setting the initial states; for simplicity let $$z_0=z_1=z_2=0$$</p>
<p>then set up a loop that repeats the following algorithm:</p>
<p>$$\text{output}=z_0 b_0+z_1 b_1+z_2 b_2$$
$$z_2=z_1$$
$$z_1=z_0$$
$$z_0=\text{input}-a_1 z_1-a_2 z_2$$</p>
<p>For anyone else that got lost like me, this is known as an IIR filter, and googling IIR filter design helped sooo much. </p>
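<p>For reference, the loop above translates to something like this (a sketch in Python rather than C; the coefficients are placeholders, and a0 is assumed already normalized to 1). Note that with this exact ordering the output lags the input by one sample, which is usually harmless.</p>

```python
# Direct-form-II sketch of the loop above (placeholder coefficients,
# a0 assumed normalized to 1; this exact ordering delays the output
# by one sample relative to the usual formulation).
def biquad(x, b0, b1, b2, a1, a2):
    z0 = z1 = z2 = 0.0
    y = []
    for sample in x:
        y.append(z0 * b0 + z1 * b1 + z2 * b2)  # output from current states
        z2 = z1                                # shift the delay line
        z1 = z0
        z0 = sample - a1 * z1 - a2 * z2        # new internal state
    return y

# With b = [1, 0, 0] and a = [1, 0, 0] the filter reduces to a pure
# one-sample delay, exposing the latency of this ordering:
assert biquad([1.0, 0.0, 0.0, 0.0], 1, 0, 0, 0, 0) == [0.0, 1.0, 0.0, 0.0]
```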
| 195
|
Laplace transform
|
Validity of applying Heaviside function for signal processing applications
|
https://dsp.stackexchange.com/questions/66998/validity-of-applying-heaviside-function-for-signal-processing-applications
|
<p>I wasn't sure if this question was more suitable for math.stackexchange, but I suspect it's more-so a signal processing question (albeit, a theoretical one) than a mathematical one.</p>
<p>I am currently studying the textbook <em>An Introduction to Laplace Transforms and Fourier Series</em>, second edition, by Phil Dyke. Chapter <strong>2.1 Real Functions</strong> describes <em>Heaviside's unit step function</em> as follows:</p>
<blockquote>
<p>Sometimes, a function <span class="math-container">$F(t)$</span> represents a natural or engineering process that has no obvious starting value. Statisticians call this a <em>time series</em>. Although we shall not be considering <span class="math-container">$F(t)$</span> as stochastic, it is nevertheless worth introducing a way of "switching on" a function. Let us start by finding the Laplace transform of a step function the name of which pays homage to the pioneering electrical engineer Oliver Heaviside (1850 - 1925). The formal definition runs as follows.</p>
<p><strong>Definition 2.1</strong> <em>Heaviside's unit step function, or simply the unit step function, is defined as</em></p>
<p><span class="math-container">$$H(t) = \begin{cases} 0 & t < 0, \\ 1 & t \ge 0. \end{cases}$$</span></p>
<p>Since <span class="math-container">$H(t)$</span> is precisely the same as <span class="math-container">$1$</span> for <span class="math-container">$t > 0$</span>, the Laplace transform of <span class="math-container">$H(t)$</span> must be the same as the Laplace transform of <span class="math-container">$1$</span>, i.e., <span class="math-container">$1/s$</span>. The switching on of an arbitrary function is achieved simply by multiplying it by the standard function <span class="math-container">$H(t)$</span>, so if <span class="math-container">$F(t)$</span> is given by the function shown in Fig. 2.1 and we multiply this function by the Heaviside unit step function <span class="math-container">$H(t)$</span> to obtain <span class="math-container">$H(t)F(t)$</span>, Fig 2.2 results. Sometimes it is necessary to define what is called the <em>two sided</em> Laplace transform</p>
<p><span class="math-container">$$\int_{-\infty}^\infty e^{-st} F(t) \ dt,$$</span></p>
<p>which makes a great deal of mathematical sense. However, the additional problems that arise by allowing negative values of <span class="math-container">$t$</span> are severe and limit the use of the two sided Laplace transform. For this reason, the two sided transform will not be pursued here.</p>
<p><a href="https://i.sstatic.net/SDZeb.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/SDZeb.png" alt="enter image description here" /></a></p>
</blockquote>
<p>What I'm having difficulty understanding is how this procedure is valid from a signal processing perspective. Mathematically, we can see that by applying the unit step function, all values of the function for <span class="math-container">$t < 0$</span> become <span class="math-container">$0$</span>. This is valid from a mathematical perspective, but it seems to eliminate all of the information associated with values <span class="math-container">$t < 0$</span>, which leads me to wonder how this is valid from a signal processing perspective, given that the unit step function is used for <em>signal processing</em> applications. Couldn't values of the function for <span class="math-container">$t < 0$</span> contain valuable information, and aren't we deleting this information when applying the Heaviside function?</p>
<p>I would greatly appreciate it if people would please take the time to explain this.</p>
| 196
|
|
Laplace transform
|
Fourier transform of unit step
|
https://dsp.stackexchange.com/questions/67974/fourier-transform-of-unit-step
|
<p>I was reading a PDF from Caltech, and in one of its sections the Fourier transform of the unit step signal is calculated. But I am confused: how can this be possible if the region of convergence of the Laplace transform (<span class="math-container">$1/s$</span>) of the unit step signal does not contain the imaginary axis?</p>
<p>And if the above is possible, then given that the impulse response of a system is the unit step, its frequency response should also exist and equal <span class="math-container">$H(ω)= πδ(ω) + 1/(jω)$</span>. Can we then calculate the Fourier transform of the output by computing <span class="math-container">$H(ω)X(ω)$</span>, where <span class="math-container">$X(ω)$</span> is the Fourier transform of the input?</p>
|
<p>The Fourier transform can be generalized for functions that are not absolutely integrable. We can define a Fourier transform for functions with a constant envelope (e.g., sine, cosine, complex exponential), and even for functions with polynomial growth (but not with exponential growth). In these cases we must be prepared to deal with generalized functions in the expression of the Fourier transform, such as the Dirac delta impulse or its derivatives. This is also true for the Fourier transform of the step function.</p>
<p>The multiplication property of the Fourier transform remains true, albeit with certain restrictions, so we can generally compute the Fourier transform of the convolution of two functions by multiplying their Fourier transforms, provided that the convolution exists.</p>
| 197
|
Laplace transform
|
Visualising a Z-transformed Transfer Function?
|
https://dsp.stackexchange.com/questions/22494/visualising-a-z-transformed-transfer-function
|
<p>For designing any analog filter, and for deriving various other filter outputs, we use the <strong>Laplace transform</strong>. I can visualise a Laplace transform; for example,<br>
<code>s[X(s)]</code> can be implemented as a differentiator fed with the signal <code>x(t)</code>. When implementing differentiators we generally use capacitors or an op-amp; similarly,
<code>[X(s)]/s</code> can be implemented as an integrator using an op-amp.</p>
<p>But I cannot visualize the z-transform in the design of digital filters.
When implementing digital filters we often map the s-plane to the z-plane, and I cannot understand the significance.</p>
<p>There are various methods of implementation, but in the end we compare the s-plane with the z-plane
and deduce an equation where the value of z is found in place of s.</p>
<p><strong>Why do we need to map the s-plane to the z-plane?</strong></p>
|
<p>in continuous-time functions, like $x(t)$, the operation of the derivative makes sense, it is well defined. the three components to a continuous-time LTI filter are adders (two or more signals being added), scalers (a signal is simply multiplied by a constant), and integrators (the $s^{-1}$ operators). the first two components do not discriminate with regard to frequency, so with just those two components, you cannot make a <em>"filter"</em> that filters out some frequencies more so than others. but the integrator <strong>does</strong> act differently on sinusoids of different frequencies. higher frequencies come out of the integrator reduced in amplitude more than lower frequencies.</p>
<p>in discrete-time functions (or "sequences"), like $x[n]$, the derivative operator does <strong>not</strong> make sense, it is not defined. the three components to a discrete-time LTI filter are adders (two or more signals being added), scalers (a signal is simply multiplied by a constant), and delay elements (the $z^{-1}$ operators). the first two components do not discriminate with regard to frequency, so with just those two components, you cannot make a <em>"filter"</em> that filters out some frequencies more so than others. but the delay element <strong>does</strong> act differently on sinusoids of different frequencies. higher frequencies come out of the delay element shifted more in phase than lower frequencies.</p>
<p>analog filters act on physical quantities in time. digital filters act on numbers. these numbers are samples of a continuous-time function ($x[n]=x(nT)$) and make up a sequence. there is no way to do derivatives or integrals directly. but we <strong>can</strong> delay any of these sequences by use of computer memory. that's why we talk of $H(z)$ more so than $H(s)$ when designing and implementing digital filters.</p>
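<p>a quick numeric illustration of the two claims above (my own sketch, arbitrary frequencies): the integrator's gain falls with frequency, while the unit delay keeps amplitude and only adds phase lag that grows with frequency.</p>

```python
import cmath

# sketch: the integrator 1/s discriminates in amplitude, while the unit
# delay z^{-1} discriminates only in phase (frequencies in rad/s and
# rad/sample respectively; values arbitrary).
def integrator(omega):
    return 1 / (1j * omega)          # H(s) = 1/s at s = j*omega

def unit_delay(omega):
    return cmath.exp(-1j * omega)    # H(z) = z^{-1} at z = e^{j*omega}

assert abs(integrator(10.0)) < abs(integrator(1.0))        # gain drops with frequency
assert abs(abs(unit_delay(1.5)) - 1.0) < 1e-15             # amplitude untouched
assert cmath.phase(unit_delay(2.0)) < cmath.phase(unit_delay(0.5))  # more lag at higher freq
```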
| 198
|
Laplace transform
|
What is the significance of Z-transform?
|
https://dsp.stackexchange.com/questions/22556/what-is-the-significance-of-z-transform
|
<p>As we know from the Laplace transform, the roots decide the stability of the system, i.e. if the roots are complex and lie in the left half of the plane you get a sinusoidal response with decreasing amplitude.</p>
<p>Similarly, is there any significance to the roots, zeros and ROC of the z-transform, and to its stability criteria? All I read in books is how to find the ROC and the properties of the z-transform like linearity, time reversal and time shifting. It is nowhere mentioned why we even use the z-transform.</p>
<p>My apologies if this question is too basic or if it doesn't belong here.</p>
|
<p>First of all, I think you're reading the wrong books. Almost any basic text on DSP has a chapter on the $\mathcal{Z}$-transform and its significance to describe linear time-invariant (LTI) discrete-time systems. If you're looking for good (and free) books, take a look at <a href="https://dsp.stackexchange.com/questions/18564/a-dsp-simple-book-reference/18568#18568">this answer</a>.</p>
<p>I will not repeat all the details you can find in those books (and in many other places), but let me just point out a few very basic things to get you started. Each (single) pole $p$ of the transfer function $H(z)$ of a causal LTI discrete-time system contributes a term</p>
<p>$$c\cdot p^nu[n]\tag{1}$$</p>
<p>to the system's impulse response, where $c$ is some constant, $p$ is the (possibly complex) pole, and $u[n]$ is the discrete-time unit step function. From (1) it is clear that this contribution only decays with time if $|p|<1$. So for a causal system to be stable we require that all the poles of the transfer function are <em>inside</em> the unit circle of the complex plane, i.e. they have magnitudes smaller than $1$. So if you're looking for analogies with the Laplace transform, the inside of the unit circle corresponds to the left half plane of the complex variable $s$. Furthermore, the unit circle of the $z$-plane corresponds to the $j\omega$-axis. Knowing these two things, it becomes very easy to carry over everything you know about transfer functions of continuous-time systems (Laplace transform) to the discrete-time domain ($\mathcal{Z}$-transform).</p>
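<p>A tiny numeric illustration of (1) (my own sketch, arbitrary constants): the pole's contribution $c\cdot p^n u[n]$ decays over time only when $|p|<1$.</p>

```python
# Sketch of (1): a single pole p contributes c * p^n * u[n]; the term
# decays over time iff |p| < 1 (constants here are arbitrary).
def pole_term(c, p, N):
    return [c * p ** n for n in range(N)]

inside = pole_term(1.0, 0.9, 50)    # pole inside the unit circle
outside = pole_term(1.0, 1.1, 50)   # pole outside the unit circle

assert abs(inside[-1]) < 0.01       # 0.9**49 has decayed away
assert abs(outside[-1]) > 100.0     # 1.1**49 has blown up
```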
| 199
|
wavelet transform
|
Continuous Wavelet Transform vs Discrete Wavelet Transform
|
https://dsp.stackexchange.com/questions/76624/continuous-wavelet-transform-vs-discrete-wavelet-transform
|
<p>The discrete wavelet transform is applied in many areas, such as signal compression, since it is easy to compute. I notice, however, that the continuous wavelet transform (CWT) is also applied in different subjects. In my opinion, the CWT is redundant and hence difficult to compute. So what are the advantages of the continuous wavelet transform?</p>
|
<p>On the one hand with the DWT, only a restricted choice of wavelets is available: those that implement 2-band perfect reconstruction (Daubechies, Symmlets, Coiflets, Spline). They are non-redundant, and often orthogonal or close to orthogonal, which simplifies some computations, inversion or statistical analysis, for instance. Yet, they are not quite shift-invariant. In other words, if you shift your signal by an integer number of samples, the coefficients are not "the same" as for the original one with a shift. You can read more at <a href="https://datascience.stackexchange.com/a/16084/12527">What is the difference between “equivariant to translation” and “invariant to translation”</a>.</p>
<p>On the other hand, the CWT theoretically allows a huge quantity of admissible wavelets. In practice, they are sub-sampled, and the reconstruction may not be exact, but can be very close to it (unnoticeable with a little noise in the data). And shift-invariance can be almost satisfied.</p>
<p>So when there is a specific wavelet shape that you want to use, because it is physically related to your system, or you want wavelets with precise properties to analyze your data finely (precise timing, local regularity, matched filtering), discrete approximations of the CWT are often more convenient in a first instance. Notably, for phase analysis, it is quite common to use the complex CWT, which people rarely do with the DWT.</p>
<p>Yet, when you have achieved your goals with the CWT, and efficiency still matters, you can search for a DWT-domain processing that yields similar results.</p>
<p>For instance in <a href="https://arxiv.org/abs/1108.4674" rel="noreferrer">Adaptive multiple subtraction with wavelet-based complex unary Wiener filters</a>, 2012, we wanted to perform adaptive pattern subtraction with 1D seismic data. We first tried to combine the DWT and FIR adaptive filters, but we were not satisfied. Then we moved to the complex CWT, and were able to compute the matched filters very efficiently in the complex domain (oddly, with one-tap or unary filters on sliding frames). After that, we studied how far we could reduce the CWT redundancy and preserve the quality.
Finally, we tried to go to 2D, where the redundancy is way more problematic. So we used discrete wavelet frames, and FIR filters, but we had a lot of difficulty obtaining much better results than with the 1D CWT version (<a href="https://arxiv.org/abs/1405.1081" rel="noreferrer">A Primal-Dual Proximal Algorithm for Sparse Template-Based Adaptive Filtering: Application to Seismic Multiple Removal</a>, 2014). I still hope one can succeed with a critically sampled DWT...</p>
| 200
|
wavelet transform
|
Synchrosqueezing Wavelet Transform explanation?
|
https://dsp.stackexchange.com/questions/71398/synchrosqueezing-wavelet-transform-explanation
|
<p>How does Synchrosqueezing Wavelet Transform work, intuitively? What does the "synchrosqueezed" part do, and how is it different from simply the (continuous) Wavelet Transform?</p>
|
<p>Synchrosqueezing is a powerful <em>reassignment</em> method. To grasp its mechanisms, we dissect the (continuous) Wavelet Transform, and how its pitfalls can be remedied. Physical and statistical interpretations are provided.</p>
<p>If unfamiliar with CWT, I recommend <a href="https://ccrma.stanford.edu/%7Eunjung/mylec/WTpart1.html" rel="noreferrer">this</a> tutorial. SSWT is implemented in MATLAB as <a href="https://www.mathworks.com/help/wavelet/ref/wsst.html" rel="noreferrer">wsst</a>, and in Python, <a href="https://github.com/OverLordGoldDragon/ssqueezepy" rel="noreferrer">ssqueezepy</a>. (-- All answer <a href="https://github.com/OverLordGoldDragon/ssqueezepy/blob/master/examples/se_ans0.py" rel="noreferrer">code</a>)</p>
<hr>
<p>Begin with CWT of a pure tone:</p>
<p><a href="https://i.sstatic.net/rCNLJ.png" rel="noreferrer"><img src="https://i.sstatic.net/rCNLJ.png" alt="enter image description here" /></a></p>
<p>A straight line in the time-frequency (rather, time-scale) plane, for our fixed-frequency sinusoid over all time - fair. ... except <em>is it</em> a straight line? No, it's a <em>band</em> of lines, seemingly centered about some maximum, likely the "true scale". Zooming,</p>
<img src="https://i.sstatic.net/mpcIW.png" height="250">
<p>makes this more pronounced. Let's plot rows within this zoomed band, one by one:</p>
<img src="https://i.imgur.com/APFoBkA.gif" width="420">
<p>and all superimposed, each for samples 0 to 127 (horizontal zoom):</p>
<img src="https://i.sstatic.net/HKxDA.png" width="420">
<p>Notice anything interesting? They all have the <strong>same frequency</strong>. It isn't particular to this sinusoid, but is how CWT works in correlating wavelets with signals.</p>
<p>It appears much of information "repeats"; there is <em>redundancy</em>. Can we take advantage of this? Well, if we just <em>assume</em> that all these adjacent bands actually stem from one and the same band, then we can <em>merge</em> them into one - and this, in a nutshell, is what synchrosqueezing does. Naturally it's more nuanced, but the underlying idea is that we <em>sum</em> components of the same instantaneous frequency to obtain a sharper, focused time-frequency representation.</p>
<p>Here's that same CWT, synchrosqueezed:</p>
<img src="https://i.sstatic.net/1g7GW.png" height="250">
<p>Now <em>that</em> is a straight line.</p>
<hr>
<p><strong>How's it work, exactly?</strong></p>
<p>We have an idea, but how exactly is this mathematically formulated? Motivated by speaker identification and Empirical Mode Decomposition, SSWT builds upon the <em>modulation model</em>:</p>
<p><span class="math-container">$$
f(t) = \sum_{k=1}^{K} A_k(t) \cos(\phi_k (t)), \tag{1}
$$</span></p>
<p>where <span class="math-container">$A_k(t)$</span> is the instantaneous amplitude and</p>
<p><span class="math-container">$$
\omega_k(t) = \frac{d}{dt}(\phi_k(t)) \tag{2}
$$</span></p>
<p>the instantaneous frequency of <em>component</em> <span class="math-container">$k$</span>, where we seek to find <span class="math-container">$K$</span> such "components" that sum to the original signal. More on this below, "MM vs FT".</p>
<p>At this stage, we only have the CWT, <span class="math-container">$W_f(a, b)$</span> (a=scale, b=timeshift); how do we extract <span class="math-container">$\omega$</span> from it? Revisit the zoomed pure tone plots; again, the <em><span class="math-container">$b$</span>-dependence</em> preserves the original harmonic oscillations at the correct frequency, <em>regardless of <span class="math-container">$a$</span></em>. This suggests we compute, for any <span class="math-container">$(a, b)$</span>, the instantaneous frequency via</p>
<p><span class="math-container">$$
\omega(a, b) = -j[W_f(a, b)]^{-1} \frac{\partial}{\partial b}W_f(a, b), \tag{3}
$$</span></p>
<p>where we've taken the <em>log-derivative</em>, <span class="math-container">$f' / f$</span>. To see why, we <a href="https://i.sstatic.net/115Md.png" rel="noreferrer">can show</a> that CWT of <span class="math-container">$f(t)=A_0 \cos (\omega_0 t)$</span> is:</p>
<p><span class="math-container">$$
W_f(a, b) = \frac{A_0}{4 \pi} \sqrt{a} \overline{\hat{\psi}(a \omega_0)} e^{j b \omega_0} \tag{4}
$$</span></p>
<p>and thus partial-diffing w.r.t. <span class="math-container">$b$</span>, we <em>extract</em> <span class="math-container">$\omega_0$</span>, and the rest in (3) gets divided out. ("But what if <span class="math-container">$f$</span> is less nice?" - see caveats).</p>
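<p>A compact numeric check of this extraction (my own sketch; a central finite difference stands in for the partial derivative in (3)): with the b-dependence of (4), the log-derivative recovers the tone's frequency.</p>

```python
import cmath

# Sketch of (3) applied to (4): with W(a, b) = (a-dependent factor) *
# exp(j*b*omega0), the log-derivative in b recovers omega0; here the
# partial derivative is approximated by a central finite difference.
omega0 = 25.0
scale_factor = 0.3          # stands in for (A0/4pi)*sqrt(a)*conj(psi_hat(a*omega0))

def W(b):
    return scale_factor * cmath.exp(1j * b * omega0)

b, db = 0.7, 1e-6
dW_db = (W(b + db) - W(b - db)) / (2 * db)
omega_est = (-1j * dW_db / W(b)).real

assert abs(omega_est - omega0) < 1e-3
```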
<p>Finally, equipped with <span class="math-container">$\omega (a, b)$</span>, we transfer the information from the <span class="math-container">$(a, b)$</span>-plane to a <span class="math-container">$(\omega, b)$</span> plane:</p>
<p><span class="math-container">$$
\boxed{ S_f (\omega_l, b) = \sum_{a_k\text{ such that } |\omega(a_k, b) - \omega_l| \leq \Delta \omega / 2} W_f (a_k, b)\, a_k^{-3/2}} \tag{5}
$$</span></p>
<p>with <span class="math-container">$\omega_l$</span> spaced apart by <span class="math-container">$\Delta \omega$</span>, and <span class="math-container">$a^{-3/2}$</span> for normalization (see "Notes").</p>
<p>And that's about it. Essentially, take our CWT, and <em>reassign</em> it, intelligently.</p>
<hr>
<p><strong>So where are the "components"?</strong> -- Extracted from high-valued (ridge) curves in the SSWT plane; in the pure tone case, it's one line, and <span class="math-container">$K=1$</span>. <a href="https://github.com/OverLordGoldDragon/ssqueezepy/tree/master/examples" rel="noreferrer">More examples</a>; we select a part of the plane and <em>invert over it</em> as many times as needed.</p>
<hr>
<p><strong>Modulation Model vs Fourier Transform</strong>:</p>
<p>What's <span class="math-container">$(1)$</span> all about, and why not just use FT? Consider a pendulum oscillating with fixed period and constant damping, and its FT:</p>
<p><span class="math-container">$$
s(t) = e^{-t} \cos (25t) u(t)\ \Leftrightarrow\ S(\omega) = \frac{1 + j\omega}{(1 + j\omega)^2 + 625}
$$</span></p>
<p><a href="https://i.sstatic.net/cAsbn.png" rel="noreferrer"><img src="https://i.sstatic.net/cAsbn.png" alt="enter image description here" /></a></p>
<p>What does the Fourier Transform tell us? <em>Infinitely many frequencies</em>, but at least peaking at the pendulum's actual frequency. Is this a sensible physical description? Hardly (only in certain indirect senses); the problem is, FT uses <em>fixed-amplitude complex sinusoid frequencies</em> as its building blocks (basis functions, or "bases"), whereas here we have a <em>variable</em> amplitude that cannot be easily represented by constant frequencies, so FT is forced to "compensate" with all these additional "frequencies".</p>
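<p>The transform pair above is easy to sanity-check numerically — a sketch approximating the continuous FT by a Riemann sum over a sampled, truncated signal (the sampling rate and duration are arbitrary choices for this illustration):</p>

```python
import numpy as np

fs = 1000                              # sampling rate -- an arbitrary choice
t = np.arange(0, 20, 1 / fs)           # long enough that exp(-t) fully decays
s = np.exp(-t) * np.cos(25 * t)        # u(t) is implicit: we only sample t >= 0

# Riemann-sum approximation of the continuous FT: S(w) ~ dt * DFT
S_num = np.fft.rfft(s) / fs
w = 2 * np.pi * np.fft.rfftfreq(len(t), 1 / fs)

S_closed = (1 + 1j * w) / ((1 + 1j * w) ** 2 + 625)

err = np.max(np.abs(S_num - S_closed))   # small discretization/aliasing error
w_peak = w[np.argmax(np.abs(S_num))]     # peaks near 25 rad/s, as claimed
```

<p>The numerical spectrum matches the closed form: it peaks near the pendulum's 25 rad/s, yet is nonzero at every frequency.</p>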
<p>This isn't limited to amplitude modulation; the less sinusoidal or periodic the function, the less meaningful its FT spectrum (though not always). Simple example: a 1 Hz triangle wave has multiple FT frequencies. Frequency modulation suffers likewise; more intuition <a href="https://dsp.stackexchange.com/a/70395/50076">here</a>.</p>
<p>These are the pitfalls the Modulation Model aims to address - by <em>decoupling</em> amplitude and frequency over time from the global signal, rather than assuming the same (and constant!) amplitude and frequency for all time.</p>
<p>Meanwhile, SSWT - perfection:</p>
<p><a href="https://i.sstatic.net/O0XwS.png" rel="noreferrer"><img src="https://i.sstatic.net/O0XwS.png" alt="enter image description here" /></a></p>
<hr>
<p><strong>Is synchrosqueezing magic?</strong></p>
<p>We seem to gain a lot by ssqueezing - apparently perfect frequency resolution, seemingly violating Heisenberg's uncertainty principle, and partial noise cancellation ("Notes"). How can this be?</p>
<p>A <strong><em>prior</em></strong>. We <em>assume</em> <span class="math-container">$f(t)$</span> is well-captured by the <span class="math-container">$A_k(t) \cos(\phi_k (t))$</span> components, e.g. based on our knowledge of the underlying physical process. In fact we assume much more than that, shown bit later, but the idea is, this works well on a <em>subset</em> of all possible signals:</p>
<p><a href="https://i.sstatic.net/5GOxl.png" rel="noreferrer"><img src="https://i.sstatic.net/5GOxl.png" alt="enter image description here" /></a></p>
<p>Indeed, there are many ways synchrosqueezing can go awry, and the more the input obeys SSWT's assumptions (which aren't too restrictive, and many signals naturally comply), the better the results.</p>
<hr>
<p><strong>What are SSWT's assumptions?</strong> (when will it fail?)</p>
<p>This is a topic of its own (which I may post on later), but briefly, the formulation's as follows. Firstly note that we must somehow restrict what <span class="math-container">$A(t)$</span> and <span class="math-container">$\psi(t)$</span> can be, else, for example, <span class="math-container">$A(t)$</span> can simply cancel out the cosine and become any other function. More precisely, the components are to be such that:</p>
<p><a href="https://i.sstatic.net/5Svsg.png" rel="noreferrer"><img src="https://i.sstatic.net/5Svsg.png" alt="enter image description here" /></a></p>
<p>More info in ref 2.</p>
<hr>
<p><strong>How would it be implemented?</strong> There's now <a href="https://github.com/OverLordGoldDragon/ssqueezepy/blob/0.5.0rc2/ssqueezepy/ssqueezing.py#L57" rel="noreferrer">Python code</a>, clean & commented. Regardless, worth noting:</p>
<ol>
<li>For very small CWT coefficients, phase is unstable (just like for DFT), which we work around by <em>zeroing</em> all such coefficients below a given threshold.</li>
<li>For any frequency row/bin <span class="math-container">$w_l$</span> in SSWT plane, we reassign from <span class="math-container">$W_f(a, b)$</span> based on what's <em>closest to</em> <span class="math-container">$w_l$</span> according to <span class="math-container">$\omega (a, b)$</span>, and for log-scaled CWT we use <em>log-distance</em>.</li>
</ol>
<hr>
<p><strong>Summary</strong>:</p>
<p>SSWT is a time-frequency analysis tool. CWT extracts the time-frequency information, and synchrosqueezing intelligently reassigns it - providing a sparser, sharper, noise-robust, and partly denoised representation. The success of synchrosqueezing is grounded in and explained by its prior; the more the input obeys the assumptions, the better the results.</p>
<hr>
<p><strong>Notes & caveats</strong>:</p>
<ul>
<li><em>What if <span class="math-container">$f$</span> isn't nice in the <span class="math-container">$\omega(a, b)$</span> example?</em> <a href="https://dsp.stackexchange.com/q/70998/50076">Valid question</a>; in practice, the more the function satisfies the aforementioned assumptions, the less of a problem this is, as the authors demonstrate through various lemmas.</li>
<li>In the SSWT of damped pendulum, I cheated a little by extending signal's time to <span class="math-container">$(-2, 6)$</span>; this is only to prevent boundary effects, which is a CWT phenomenon that can be remedied; here's directly <a href="https://i.sstatic.net/HPTXc.png" rel="noreferrer">0 to 6</a>.</li>
<li><em>Partial noise cancellation?</em> Indeed; see pg 536 of ref 1.</li>
<li><em>What's the <span class="math-container">$a^{-3/2}$</span> in <span class="math-container">$(5)$</span>?</em> Synchrosqueezing effectively <em>inverts</em> <span class="math-container">$W_f$</span> onto the reassigned plane, using <a href="https://dsp.stackexchange.com/q/71273/50076">one-integral iCWT</a>.</li>
<li><strong><em>"Fourier bad?"</em></strong> My earlier comparison is prone to criticism. To be clear, FT is the most solid and general-purpose basis that we have for a signals framework. But it's not an <em>all-purpose</em>-best; depending on context, other constructions are more meaningful <em>and</em> more useful.</li>
</ul>
<hr>
<p><strong>Where to learn more?</strong></p>
<p>The referenced papers are a good source, as are MATLAB's <code>wsst</code> and <code>cwt</code> docs and <code>ssqueezepy</code>'s source code. I may also write further Q&A's, which you can be notified of by subscribing to <a href="https://github.com/OverLordGoldDragon/ssqueezepy/issues/7" rel="noreferrer">this thread</a>.</p>
<hr>
<p><strong>References</strong>:</p>
<ol>
<li><a href="https://services.math.duke.edu/%7Eingrid/publications/DM96.pdf" rel="noreferrer">A Nonlinear Squeezing of the CWT Based on Auditory Nerve Models</a> - I. Daubechies, S. Maes. Excellent origin paper with succinct intuitions.</li>
<li><a href="https://arxiv.org/abs/0912.2437" rel="noreferrer">Synchrosqueezed Wavelet Transforms: a tool for Empirical Mode Decomposition</a> - I. Daubechies, J. Lu, H.T. Wu. Good followup paper with examples.</li>
<li><a href="https://arxiv.org/abs/1105.0010" rel="noreferrer">The Synchrosqueezing algorithm for time-varying spectral analysis: robustness properties and new paleoclimate applications</a> - G. Thakur, E. Brevdo, et al. Further exploration of robustness properties and implementation details (including threshold-setting).</li>
</ol>
| 201
|
wavelet transform
|
Implementing Continuous Wavelet Transform
|
https://dsp.stackexchange.com/questions/37528/implementing-continuous-wavelet-transform
|
<p>I need to implement the discretized continuous wavelet transform from scratch. Could someone please point me to useful papers and references available online for this?</p>
|
<p>In 1D, some of the standard references are:</p>
<ul>
<li><a href="http://www.sciencedirect.com/science/article/pii/S0165168402001408" rel="nofollow noreferrer">Continuous wavelet transform with arbitrary scales and $O({N})$ complexity</a>, A. Muñoz and R. Ertl\'e and M. Unser, Signal Processing, 2002</li>
<li><a href="http://dx.doi.org/10.1109/ACSSC.1997.679101" rel="nofollow noreferrer">A fast approximation to the continuous wavelet transform with applications</a>, Berkner, K. and Wells, R. O., Jr., 1997, Proc. Asilomar</li>
<li><a href="http://dx.doi.org/10.1137/S0036139995288010" rel="nofollow noreferrer">Fast Quasi-Continuous Wavelet Algorithms for Analysis and Synthesis of One-Dimensional Signals</a>, Maes, S. H., 1997, SIAM J. Appl. Math.</li>
<li><a href="http://proceedings.spiedigitallibrary.org/proceeding.aspx?articleid=1021580" rel="nofollow noreferrer">Comparison of algorithms for the fast computation of the continuous wavelet transform</a>, Vrhel, M. J. and Lee, C. and Unser, M. A., 1996, Proc. SPIE</li>
<li><a href="https://doi.org/10.1109/18.119724" rel="nofollow noreferrer">Fast algorithms for discrete and continuous wavelet transforms</a>, Rioul, O. and Duhamel, P., 1992, IEEE Trans. Inform. Theory</li>
</ul>
<p>For a GPU implementation:</p>
<ul>
<li><a href="http://www.fim.uni-passau.de/fileadmin/files/lehrstuhl/sauer/geyer/BA_MA_Arbeiten/BA-MaltanRichard-201404.pdf" rel="nofollow noreferrer">Effiziente Berechnung der FWT auf Grafikkarten</a>, Richard Maltan, Bachelorarbeit, 2014</li>
</ul>
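<p>Before diving into the fast algorithms above, it may help to see that a naive discretized CWT is only a few lines. Here is an O(N²)-per-scale sketch using a simplified Morlet wavelet (the admissibility correction term is omitted — an assumption of this illustration, not something taken from the referenced papers, which are precisely about doing this faster and more accurately):</p>

```python
import numpy as np

def naive_cwt(x, scales, dt=1.0, w0=6.0):
    """Direct discretization of W(a,b) = a^(-1/2) * integral x(t) psi*((t-b)/a) dt,
    one convolution per scale. O(N^2) per scale; the papers above do better."""
    n = len(x)
    t = (np.arange(n) - n // 2) * dt
    out = np.empty((len(scales), n), dtype=complex)
    for i, a in enumerate(scales):
        u = t / a
        # simplified Morlet wavelet (admissibility correction term omitted)
        psi = np.pi ** -0.25 * np.exp(1j * w0 * u) * np.exp(-u ** 2 / 2)
        out[i] = np.convolve(x, np.conj(psi[::-1]), mode='same') * dt / np.sqrt(a)
    return out
```

<p>For a pure tone at <span class="math-container">$\omega$</span> rad/s, the response magnitude peaks at the scale <span class="math-container">$a \approx \omega_0/\omega$</span>, which is a quick way to validate any faster implementation against this reference.</p>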
| 202
|
wavelet transform
|
Daubechies wavelet transform
|
https://dsp.stackexchange.com/questions/28629/daubechies-wavelet-transform
|
<p>I have N samples obtained by sampling a signal with a lot of frequency content. How will I apply the Daubechies wavelet transform to obtain the frequencies and their locations? I need to write a program which will process the signal and give the frequency and location as the result.</p>
|
<p>Looks like you need a general explanation of the discrete wavelet transform (DWT). DWT breaks a signal down into subbands distributed evenly in a logarithmic frequency scale, each subband sampled at a rate proportional to the frequencies in that band. The traditional Fourier transform has no time-domain resolution at all, or, when done using many short windows on longer data, equal resolution at all frequencies. The distribution of samples in the time and frequency domain by DWT is of the form:</p>
<pre><code>log f
|XXXXXXXXXXXXXXXX X = a sample
|X X X X X X X X f = frequency
|X X X X t = time
|X X
|X
|X
----------------t
</code></pre>
<p>Single subband decomposition and reconstruction:</p>
<pre><code> -> high -> decimate -------------> dilute -> high
| pass by 2 high subband by 2 pass \
in | + out
| / =in
-> low -> decimate -------------> dilute -> low
pass by 2 low subband by 2 pass
</code></pre>
<p>This creates two subbands from the input signal, both sampled at half
the original frequency. The filters approximate halfband finite impulse response (FIR) filters
and are determined by the choice of wavelet. Using Daubechies wavelets
(and most others), the data can be reconstructed to the exact original
even when the halfband filters are not perfect. Note that in the above scheme, the total amount of information (samples) stays the same throughout.</p>
<pre><code>Decimation by 2: ABCDEFGHIJKLMNOPQR -> ACEGIKMOQ
Dilution by 2: ACEGIKMOQ -> A0C0E0G0I0K0M0O0Q0
</code></pre>
<p>To get the logarithmic resolution in frequency, the low subband is
re-transformed, and again, the low subband from this transformation
gets the same treatment etc.</p>
<p>Decomposition:</p>
<pre><code> -> high -> decimate --------------------------------> subband0
| pass by 2
in | -> high -> decimate ---------------> subband1
| | pass by 2
-> low -> decim | -> high -> decim -> subband2
pass by 2 | | pass by 2
-> low -> decim |
pass by 2 | . down to what suffices
-> . or if periodic data,
. until short of data
</code></pre>
<p>Reconstruction:</p>
<pre><code>subband0 -----------------------------------> dilute -> high
by 2 pass \
subband1 ------------------> dilute -> high + out
by 2 pass \ / =in
subband2 -> dilute -> high + dilute -> low
by 2 pass \ / by 2 pass
+ dilute -> low
Start . / by 2 pass
here! . -> dilute -> low
. by 2 pass
</code></pre>
<p>In a real-time application, the filters introduce delays, so you need
to compensate them by adding additional delays to less-delayed higher
bands, to get the summation work as intended.</p>
<p>For periodic signals or windowed operation, this problem doesn't exist -
a single subband transformation is a matrix multiplication, with wrapping
implemented in the matrix:</p>
<p>Decomposition:</p>
<pre><code>|L0| |C0 C1 C2 C3 | |I0| L = lowpass output
|H0| |C3 -C2 C1 -C0 | |I1| H = highpass output
|L1| | C0 C1 C2 C3 | |I2| I = input
|H1| = | C3 -C2 C1 -C0 | |I3| C = coefficients
|L2| | C0 C1 C2 C3| |I4|
|H2| | C3 -C2 C1 -C0| |I5|
|L3| |C2 C3 C0 C1| |I6|
|H3| |C1 -C0 C3 -C2| |I7| Daubechies 4-coef:
1+sqrt(3) 3+sqrt(3) 3-sqrt(3) 1-sqrt(3)
C0 = --------- C1 = --------- C2 = --------- C3 = ---------
4 sqrt(2) 4 sqrt(2) 4 sqrt(2) 4 sqrt(2)
</code></pre>
<p>Reconstruction:</p>
<pre><code>|I0| |C0 C3 C2 C1| |L0|
|I1| |C1 -C2 C3 -C0| |H0|
|I2| |C2 C1 C0 C3 | |L1|
|I3| = |C3 -C0 C1 -C2 | |H1|
|I4| | C2 C1 C0 C3 | |L2|
|I5| | C3 -C0 C1 -C2 | |H2|
|I6| | C2 C1 C0 C3| |L3|
|I7| | C3 -C0 C1 -C2| |H3|
</code></pre>
<p>C0, C1, C2, C3 are the "db2" lowpass FIR filter coefficients. Highpass
coefficients you get by reversing tap order and multiplying by
sequence 1,-1, 1,-1, ... Because these are orthogonal wavelets, the
analysis and reconstruction coefficients are the same.</p>
<p>A coefficient set convolved by its reverse is an ideal halfband lowpass
filter multiplied by a symmetric windowing function. This creates the
kind of symmetry in the frequency domain that enables aliasing-free
reconstruction. Daubechies wavelets are the minimum-phase, minimum
number of taps solutions for a number of vanishing moments (seven in
"db7" etc.), which determines their frequency selectivity.</p>
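<p>The 8×8 matrices above are easy to verify numerically: since Daubechies wavelets are orthogonal, the decomposition matrix times its transpose gives the identity, and the transpose reconstructs the input exactly. A numpy sketch:</p>

```python
import numpy as np

s3, s2 = np.sqrt(3), np.sqrt(2)
C = np.array([1 + s3, 3 + s3, 3 - s3, 1 - s3]) / (4 * s2)  # db2 lowpass taps

N = 8
lo = C
hi = C[::-1] * np.array([1, -1, 1, -1])   # reversed taps, alternating signs
D = np.zeros((N, N))
for k in range(N // 2):
    idx = (np.arange(4) + 2 * k) % N      # wrapping at the block edge
    D[2 * k, idx] = lo                    # L rows
    D[2 * k + 1, idx] = hi                # H rows

# Orthogonality: the reconstruction matrix is simply the transpose
print(np.allclose(D @ D.T, np.eye(N)))   # True

# Perfect reconstruction of an arbitrary input
x = np.arange(1.0, 9.0)
coeffs = D @ x                           # interleaved L0, H0, L1, H1, ...
x_rec = D.T @ coeffs
print(np.allclose(x_rec, x))             # True
```

<p>The same loop with six taps reproduces the 10×10 "db3" matrices shown below.</p>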
<p>I was asked to show the matrices for 6 coefficients, so here they are, made a bit larger for clarity but could be the same size as before too. Decomposition:</p>
<pre><code>|L0| |C0 C1 C2 C3 C4 C5 | |I0|
|H0| |C5 -C4 C3 -C2 C1 -C0 | |I1|
|L1| | C0 C1 C2 C3 C4 C5 | |I2|
|H1| | C5 -C4 C3 -C2 C1 -C0 | |I3|
|L2| = | C0 C1 C2 C3 C4 C5| |I4|
|H2| | C5 -C4 C3 -C2 C1 -C0| |I5|
|L3| |C4 C5 C0 C1 C2 C3| |I6|
|H3| |C1 -C0 C5 -C4 C3 -C2| |I7|
|L4| |C2 C3 C4 C5 C0 C1| |I8|
|H4| |C3 -C2 C1 -C0 C5 -C4| |I9|
</code></pre>
<p>Reconstruction:</p>
<pre><code>|I0| |C0 C5 C4 C1 C2 C3| |L0|
|I1| |C1 -C4 C5 -C0 C3 -C2| |H0|
|I2| |C2 C3 C0 C5 C4 C1| |L1|
|I3| |C3 -C2 C1 -C4 C5 -C0| |H1|
|I4| = |C4 C1 C2 C3 C0 C5 | |L2|
|I5| |C5 -C0 C3 -C2 C1 -C4 | |H2|
|I6| | C4 C1 C2 C3 C0 C5 | |L3|
|I7| | C5 -C0 C3 -C2 C1 -C4 | |H3|
|I8| | C4 C1 C2 C3 C0 C5| |L4|
|I9| | C5 -C0 C3 -C2 C1 -C4| |H4|
</code></pre>
<p>With:</p>
<pre><code>C0 = 3.326705529500826159985115891390056300129233992450683597084705e-01
C1 = 8.068915093110925764944936040887134905192973949948236181650920e-01
C2 = 4.598775021184915700951519421476167208081101774314923066433867e-01
C3 = -1.350110200102545886963899066993744805622198452237811919756862e-01
C4 = -8.544127388202666169281916918177331153619763898808662976351748e-02
C5 = 3.522629188570953660274066471551002932775838791743161039893406e-02
</code></pre>
<p>More coefficient sets can be found <a href="http://yehar.com/blog/wp-content/uploads/2009/09/daub.h" rel="nofollow">here</a>.</p>
| 203
|
wavelet transform
|
Opposite of wavelet transform?
|
https://dsp.stackexchange.com/questions/24766/opposite-of-wavelet-transform
|
<p><img src="https://upload.wikimedia.org/wikipedia/commons/thumb/c/c4/STFT_and_WT.jpg/500px-STFT_and_WT.jpg" alt="STFT and Wavelet"></p>
<p>Wavelet transform gives good time resolution for high-frequency events and good frequency resolution for low-frequency events. </p>
<p>I want the complete opposite of the wavelet transform, where I get good time resolution for low-frequency events. Is there any known transform or specific window to achieve this?</p>
|
<p>The wavelet transform also serves what you want to do: all you have to do is first apply a low-pass filter to your signal, so as to keep only the frequency range you want to scan. Another approach is the Gabor transform, where you have to define the size and shape of the analysis window; but I would recommend using the wavelet transform, applying a low-pass filter to your signal first.</p>
| 204
|
wavelet transform
|
Disadvantages of wavelet transform
|
https://dsp.stackexchange.com/questions/15148/disadvantages-of-wavelet-transform
|
<p>I have a question related to the wavelet transform: we know that while the Fourier transform is good for spectral analysis, i.e. telling which frequency components occur in a signal, it will not give information about when they happen. That's why the wavelet transform is suitable for time-frequency analysis. It is also good for signal denoising, but of course it has some disadvantages.</p>
<p>So I would like to know: what are the main advantages of the wavelet transform? Is it good for spectral estimation, like finding amplitudes, frequencies and phases, or does it just help us find discontinuities and irregularities of a signal?</p>
<p>Thanks in advance</p>
|
<p>If you consider the whole set of potential wavelet transforms, then you have a lot of flexibility. </p>
<p>For instance, should you use 1D continuous complex wavelet transforms, by analyzing the modulus and the phase of the scalogram, and provided you use well-chosen wavelets (potentially different for the analysis and the synthesis), and a proper discretization, you can:</p>
<ul>
<li>find discontinuities and irregularities of a signal and its derivatives <a href="https://i.sstatic.net/jg016.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/jg016.jpg" alt="enter image description here"></a></li>
<li>find break point location by wavelet ridge extrapolation</li>
<li>denoise</li>
<li>perform matched filtering based on templates (with <a href="http://arxiv.org/abs/1108.4674" rel="nofollow noreferrer">complex continuous</a> or <a href="http://arxiv.org/abs/1405.1081" rel="nofollow noreferrer">discrete dual-tree wavelet</a> frames)</li>
<li><a href="http://www.scholarpedia.org/article/Wavelet-based_multifractal_analysis" rel="nofollow noreferrer">analyse (multi-)fractalty</a></li>
<li>analyse frequencies (with Gabor wavelets for instance)</li>
</ul>
<p>Due to the redundancy, and the quantity of available wavelets (not the same is best for different purposes), they could appear a little less efficient for the analysis of pure stationary and harmonics signals, for which Fourier is better suited.</p>
<p>The main drawbacks are:</p>
<ul>
<li>for fine analysis, it becomes computationally intensive</li>
<li>its discretization, the discrete wavelet transform (computationally efficient), is less natural and flexible</li>
<li>it takes some effort to invest in wavelets, to be able to choose the proper ones for a specific purpose and to implement them correctly.</li>
</ul>
| 205
|
wavelet transform
|
Continuous wavelet transform
|
https://dsp.stackexchange.com/questions/58615/continuous-wavelet-transform
|
<p>Continuous wavelet transformation has been quite widely used for various applications. Most of the papers that I found were using CWT for non-stationary signals. Can we use CWT for stationary signal analysis? If not, what are the drawbacks of using the continuous wavelet transform?</p>
|
<p>Stationarity is a multi-fold concept in signal processing. It can denote a wide range of behavior, encompassing deterministic or stochastic aspects. Beyond that, the main question is: do you know if your signal is stationary, and how?</p>
<p>If you actually know how, it is probably wiser to use the generation process to build a custom, adapted model or transformation, and use it for the analysis. </p>
<p>Even in that case, I strongly advocate using different analysis methods in parallel, to help you detect artifacts and issues that you would not detect with a single model. For instance, remember that one usually observes only a few realizations of a "signal", and that acquisition issues, outliers, etc. may occur.</p>
<p>Finally, analyzing a signal in first intention with time-frequency or time-scale transforms is a good idea, as it can help you detect the useful scales of interest, estimate parameters of stochastic events, etc.</p>
<p>The drawbacks are:</p>
<ul>
<li>The difficulties in choosing the appropriate wavelet (real or complex), and the associated sampling (and the resulting speed)</li>
<li>The difficulties in interpreting the scalogram, as a knowledge of the underlying processes could be useful</li>
</ul>
| 206
|
wavelet transform
|
Implementing Wavelet Transform using Equations
|
https://dsp.stackexchange.com/questions/8781/implementing-wavelet-transform-using-equations
|
<p>I want to implement the wavelet transform from scratch, that is, break the wavelet transform into its equations so it can be implemented in any programming language. Matlab comes with built-in functions to implement the wavelet transform, but it is really hard to understand which processes are exactly involved in the implementation if one wants to develop their own functions.</p>
<p>I know there are low- and high-pass filters involved, and another step called down-sampling, but I still have many doubts about how exactly to combine these filters and samplers to design one's own wavelet transform function.</p>
<p>The wavelet transform block diagram looks like this,</p>
<p><img src="https://i.sstatic.net/d6cFh.png" alt="enter image description here"></p>
<p>but there are so many wavelet transforms, like Haar, db1, db2, etc. Which wavelet transform does this block diagram define, anyway?</p>
|
<p>I found <a href="http://grail.cs.washington.edu/pub/stoll/wavelet1.pdf" rel="nofollow">Wavelets for Computer Graphics: A Primer</a> to be a good introduction to the Haar wavelet and its role in image processing.</p>
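<p>To complement the reference, here is a minimal sketch of the core step the block diagram in the question describes — one analysis level, i.e. filter then decimate by 2 — using the Haar ("db1") filters. This is an illustration with an assumed periodic-wrapping convention, not an excerpt from the primer:</p>

```python
import numpy as np

def analysis_level(x, lo, hi):
    """One level of the block diagram: lowpass and highpass filter,
    then keep every second output sample (decimate by 2). Periodic
    wrapping keeps the total number of samples unchanged."""
    n = len(x)
    taps = len(lo)
    L = np.empty(n // 2)
    H = np.empty(n // 2)
    for k in range(n // 2):
        window = x[(2 * k + np.arange(taps)) % n]   # wrap at the signal edge
        L[k] = window @ lo
        H[k] = window @ hi
    return L, H

# Haar ("db1") filters -- the simplest pair; db2, db3, ... just change the taps
lo = np.array([1.0, 1.0]) / np.sqrt(2)
hi = np.array([1.0, -1.0]) / np.sqrt(2)

x = np.array([4.0, 6.0, 10.0, 12.0, 8.0, 6.0, 5.0, 5.0])
approx, detail = analysis_level(x, lo, hi)   # scaled pairwise sums / differences
```

<p>Iterating <code>analysis_level</code> on the <code>approx</code> output gives the multi-level cascade in the diagram; the diagram itself is wavelet-agnostic — the choice of filter taps (Haar, db2, ...) is what selects the wavelet.</p>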
| 207
|
wavelet transform
|
Wavelet Transform
|
https://dsp.stackexchange.com/questions/2149/wavelet-transform
|
<p>I want to perform a 2D Haar discrete wavelet transform and inverse DWT on an image. <strong>Will you please explain the 2D Haar discrete wavelet transform and inverse DWT in simple language, and give an algorithm with which I can write the code for the 2D Haar DWT</strong>? The information given in Google was too technical. I understood the basic things like dividing the image into 4 sub-bands: LL, LH, HL, HH, but I can't really understand how to write a program to perform DWT and IDWT on an image. I also read that DWT is better than DCT as it is performed on the image as a whole, and then there was some explanation which went over the top of my head. I might be wrong here, but I think DWT and DCT are compression techniques because the image size reduces when DWT or DCT is performed on them. Hoping you guys share a part of your knowledge and enhance mine.</p>
<p>Thank You</p>
<p>Re:
Does it have anything to do with the image format? What is the "value of pixel" that is used in DWT? I have assumed it to be the RGB value of the image.</p>
<pre><code>import java.awt.event.*;
import javax.swing.*;
import java.awt.image.BufferedImage;
import javax.swing.JFrame;
import javax.swing.SwingUtilities;
import java.io.*;
import javax.swing.JFileChooser;
import javax.swing.filechooser.FileFilter;
import javax.swing.filechooser.FileNameExtensionFilter;
import javax.imageio.ImageIO;
import java.awt.*;
import java.lang.*;
import java.util.*;
class DiscreteWaveletTransform
{
public static void main(String arg[])
{ DiscreteWaveletTransform dwt=new DiscreteWaveletTransform();
dwt.initial();
}
static final int TYPE=BufferedImage.TYPE_INT_RGB;
public void initial()
{
try{
BufferedImage buf=ImageIO.read(new File("lena.bmp"));
int w=buf.getWidth();
int h=buf.getHeight();
BufferedImage dwtimage=new BufferedImage(h,w,TYPE);
int[][] pixel=new int[h][w];
for (int x=0;x<h;x++)
{
for(int y=0;y<w;y++)
{
pixel[x][y]=buf.getRGB(x,y);
}
}
int[][] mat = new int[h][w];
int[][] mat2 = new int[h][w];
for(int a=0;a<h;a++)
{
for(int b=0,c=0;b<w;b+=2,c++)
{
mat[a][c] = (pixel[a][b]+pixel[a][b+1])/2;
mat[a][c+(w/2)] = Math.abs(pixel[a][b]-pixel[a][b+1]);
}
}
for(int p=0;p<w;p++)
{
for(int q=0,r =0 ;q<h;q+=2)
{
mat2[r][p] = (mat[q][p]+mat[q+1][p])/2;
mat2[r+(h/2)][p] = Math.abs(mat[q][p]-mat[q+1][p]);
}
}
for (int x=0;x<h;x++)
{
for(int y=0;y<w;y++)
{
dwtimage.setRGB(x,y,mat2[x][y]);
}
}
String format="bmp";
ImageIO.write(dwtimage,format, new File("DWTIMAGE.bmp"));
}
catch(Exception e)
{
e.printStackTrace();
}
}
}
</code></pre>
<p>The output is a black image with a thin line in between, in short nowhere near the actual output. I think I have interpreted the logic wrongly. Please point out the mistakes.
Regards</p>
|
<blockquote>
<p>Will you please explain 2D haar discrete wavelet transform and inverse
DWT in a simple language</p>
</blockquote>
<p>It is useful to think of the wavelet transform in terms of the <a href="http://en.wikipedia.org/wiki/Discrete_Fourier_transform">Discrete Fourier Transform</a> (for a number of reasons, please see below). In the Fourier Transform, you decompose a signal into a series of orthogonal trigonometric functions (cos and sin). It is essential for them to be orthogonal so that it is possible to decompose your signals in a series of coefficients (of two functions that are essentially INDEPENDENT of each other) and recompose it back again.</p>
<p>With this <a href="http://en.wikipedia.org/wiki/Orthonormality">criterion of orthogonality</a> in mind, is it possible to find two other functions that are orthogonal besides the cos and sin? </p>
<p>Yes, it is possible to come up with such functions with the additional useful characteristic that they do not extend to infinity (like the cos and the sin do). One example of such pair of functions is the <a href="http://en.wikipedia.org/wiki/Haar_wavelet">Haar Wavelet</a>.</p>
<p>Now, in terms of DSP, it is perhaps more practical to think about these two "orthogonal functions" as two Finite Impulse Response (FIR) filters and the <a href="http://en.wikipedia.org/wiki/Discrete_wavelet_transform">Discrete Wavelet Transform</a> as a series of Convolutions (or in other words, applying these filters successively over some time series). You can verify this by comparing and contrasting the formulas of the 1-D DWT and <a href="http://en.wikipedia.org/wiki/Convolution">that of convolution</a>.</p>
<p>In fact, if you notice the Haar functions closely you will see the two most elementary low pass and high pass filters. Here is a very simple low pass filter h=[0.5,0.5] (don't worry about the scaling for the moment) also known as a <a href="http://en.wikipedia.org/wiki/Moving_average">moving average filter</a> because it essentially returns the average of every two adjacent samples. Here is a very simple high pass filter h=[1, -1] also known as a <a href="http://en.wikipedia.org/wiki/High_pass_filter">differentiator</a> because it returns the difference between any two adjacent samples.</p>
<p>To perform DWT-IDWT on an image, it is simply a case of using the two dimensional versions of convolution (to apply your Haar filters successively).</p>
<p>Perhaps now you can begin to see where the LowLow,LowHigh,HighLow,HighHigh parts of an image that has undergone DWT come from. HOWEVER, please note that an image is already TWO DIMENSIONAL (maybe this is confusing some times). In other words, you must derive the Low-High Spatial frequencies for the X axis and the same ranges for the Y axis (this is why there are two Lows and two Highs per axis)</p>
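<p>As a concrete starting point, one level of the 2D Haar DWT can be written with plain pairwise averages and differences. This sketch uses the unnormalized averaging convention, and note that subband naming (LH vs HL) varies between texts:</p>

```python
import numpy as np

def avg_diff(a, axis):
    """Pairwise averages and differences along one axis (unnormalized Haar)."""
    even = a.take(np.arange(0, a.shape[axis], 2), axis=axis)
    odd = a.take(np.arange(1, a.shape[axis], 2), axis=axis)
    return (even + odd) / 2.0, (even - odd) / 2.0

def haar2d(img):
    """One level of the 2D Haar DWT: transform along columns, then rows,
    yielding four half-size subbands."""
    lo, hi = avg_diff(img.astype(float), axis=1)   # along x
    LL, LH = avg_diff(lo, axis=0)                  # along y
    HL, HH = avg_diff(hi, axis=0)
    return LL, LH, HL, HH
```

<p>Since <span class="math-container">$s=(p+q)/2$</span> and <span class="math-container">$d=(p-q)/2$</span> invert as <span class="math-container">$p=s+d$</span>, <span class="math-container">$q=s-d$</span>, the inverse DWT just undoes each step in reverse order. Also note this operates on one scalar channel at a time (e.g. grayscale, or R, G, B separately) — not on packed RGB integers.</p>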
<blockquote>
<p>and an algorithm using which I can write the code for 2D haar dwt?</p>
</blockquote>
<p>You must really give it a try to code this on your own from first principles so that you get an understanding of the whole process. It is very easy to find a ready made piece of code that will do what you are looking for but i am not sure that this would really help you in the long term.</p>
<blockquote>
<p>I might be wrong here but I think DWT and DCT compression techniques
because the image size reduces when DWT or DCT is performed on them</p>
</blockquote>
<p>This is where it really "pays" to think of the DWT in terms of the Fourier Transform. For the following reason:</p>
<p>In the Fourier Transform (and of course the DCT as well), you transform MANY SAMPLES (in the time domain) to ONE (complex) coefficient (in the frequency domain). This is because, you construct different sinusoids and cosinusoids and then you multiply them with your signal and obtain the average of that product. So, you know that a single coefficient Ak represents a scaled version of a sinusoid of some frequency (k) in your signal.</p>
<p>Now, if you look at some of the wavelet functions you will notice that they are a bit more complex than the simple sinusoids. For example, consider the Fourier Transform of the High Pass Haar Filter...The high pass Haar filter looks like a square wave, i.e. it has sharp edges (sharp transitions)...What does it take to create SHARP EDGES?.....Many, many different sinusoids and co-sinusoids (!)</p>
<p>Therefore, representing your signal / image using wavelets saves you more space than representing it with the sinusoids of a DCT because ONE set of wavelet coefficients represents MORE DCT COEFFICIENTS. (A slightly more advanced but related topic that might be of help to you to understand why this works this way is <a href="http://en.wikipedia.org/wiki/Matched_filter">Matched Filtering</a>).</p>
<p>Two good online links (in my opinion at least :-) ) are:
<a href="http://faculty.gvsu.edu/aboufade/web/wavelets/tutorials.htm">http://faculty.gvsu.edu/aboufade/web/wavelets/tutorials.htm</a>
and;
<a href="http://disp.ee.ntu.edu.tw/tutorial/WaveletTutorial.pdf">http://disp.ee.ntu.edu.tw/tutorial/WaveletTutorial.pdf</a></p>
<p>Personally, i have found very helpful, the following books:
<a href="http://rads.stackoverflow.com/amzn/click/0124666051">http://www.amazon.com/A-Wavelet-Tour-Signal-Processing/dp/0124666051</a> (By Mallat)
and;
<a href="http://rads.stackoverflow.com/amzn/click/0961408871">http://www.amazon.com/Wavelets-Filter-Banks-Gilbert-Strang/dp/0961408871/ref=pd_sim_sbs_b_3</a> (By Gilbert Strang)</p>
<p>Both of these are absolutely brilliant books on the subject.</p>
<p>I hope this helps </p>
<p>(sorry, i just noticed that this answer may be running a bit too long :-/ )</p>
| 208
|
wavelet transform
|
stationary vs. undecimated wavelet transform
|
https://dsp.stackexchange.com/questions/27836/stationary-vs-undecimated-wavelet-transform
|
<p>I am a little bit confused about the difference between the stationary wavelet transform and the undecimated wavelet transform.</p>
<p>So, can anyone tell me, if there is a difference between them?</p>
|
<p>The translation invariant version of the DWT is known by a variety of names, including stationary wavelet transform (SWT), redundant wavelet transform, algorithm à trous, quasi-continuous wavelet transform, translation-invariant wavelet transform, shift invariant wavelet transform, cycle spinning, maximal overlap wavelet transform and undecimated wavelet transform.</p>
<p><strong>Ebadi, Ladan, and Helmi ZM Shafri. "A stable and accurate wavelet-based method for noise reduction from hyperspectral vegetation spectrum." Earth Science Informatics (2014): 1-15.</strong></p>
| 209
|
wavelet transform
|
Wavelet Transform and STFT
|
https://dsp.stackexchange.com/questions/54551/wavelet-transform-and-stft
|
<p>How is the wavelet transform different from the STFT?</p>
<p>I'm not able to understand what resolution in the frequency domain means.</p>
|
<p><a href="https://i.sstatic.net/SyTNn.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/SyTNn.png" alt="STFT vs CWT"></a></p>
<p>In the STFT, you apply windowing and Fourier transform on the signal using sliding patches and then combine the resulting transforms, which will help you eventually end up with a uniform time/frequency representation of the signal.</p>
<p>In the wavelet transform case, you apply a filter bank on the overall signal at once. In this way, you obtain a coarse-to fine resolution pattern on the time/frequency representation.</p>
<p>Both methods result in similar time/frequency representations which can be derived from each other.</p>
<p>The major differences:</p>
<ol>
<li>STFT is uniform, yet CWT is not.</li>
<li>You apply STFT on patches, but you apply CWT on the overall signal.</li>
<li>STFT involves Fourier transforms, but CWT only requires an orthogonal filter bank.</li>
</ol>
| 210
|
wavelet transform
|
Transfer functions from wavelet transform
|
https://dsp.stackexchange.com/questions/30895/transfer-functions-from-wavelet-transform
|
<p>So I have this problem where I need to measure the phase of a signal and correct for a delay associated with the travel time of the signal while simultaneously determining the transfer function of my system (with the delay corrected).</p>
<p>So I thought I probably need a wavelet transform so that I can determine when my signal arrives as well as the spectral components of the signal, but my transfer function is supposed to be defined in terms of the fourier transform of my signal (transfer function $H(\omega)$ is defined as $\frac{\mathscr{F}\{S_{out}\}}{\mathscr{F}\{S_{in}\}}$), so my question is:</p>
<ul>
<li>What kind of transform do I need to run on the wavelet transform (i.e. CWT) in order to get the fourier coefficients of my signal at different times? </li>
<li><p>As an additional question to this, is it even necessary to apply a second transform afterwards to get the correct transfer function, or could I perhaps deduce my transfer function directly from the wavelet transform?</p></li>
<li><p>Basically in the shortest amount of words possible, how do I get transfer functions from wavelet transforms?</p></li>
</ul>
| 211
|
|
wavelet transform
|
Relationship between windowed fourier transform and wavelet transform
|
https://dsp.stackexchange.com/questions/13779/relationship-between-windowed-fourier-transform-and-wavelet-transform
|
<p>I was reading about the windowed Fourier transform and the wavelet transform, and I was thinking that the windowed Fourier transform is a subset of the wavelet transform. Is that true?</p>
|
<p>Define the Fourier transform as <span class="math-container">$$ x(t) = \mathscr{F}^{-1}\big\{ X(\omega) \big\} \triangleq\frac{1}{2 \pi} \int_{-\infty}^{\infty} X(\omega) e^{j\omega t} \ d\omega $$</span></p>
<p>and <span class="math-container">$$ X(\omega) = \mathscr{F}\big\{ x(t) \big\} = \int_{-\infty}^{\infty} x(t) e^{-j\omega t} \ dt $$</span>.</p>
<p>Define the real-valued and non-negative window function <span class="math-container">$w(t) \ge 0$</span>: <span class="math-container">$$ \int_{-\infty}^{\infty} w^2(u) \ du = \int_{-\infty}^{\infty} \frac{1}{a} w^2\left( \frac{u}{a} \right) \ du = \int_{-\infty}^{\infty} \frac{1}{a} w^2\left( \frac{t-\tau}{a} \right) \ d\tau \triangleq \frac{c_g}{2 \pi} \quad \forall a \ne 0 \quad \forall t$$</span> .</p>
<p>Call this Eq (1): <span class="math-container">$$ \begin{align}
x(t) & = x(t) \left[ \frac{2 \pi}{c_g} \int_{-\infty}^{\infty} \frac{1}{a} w^2\left( \frac{t-\tau}{a} \right) \ d\tau \right] \\
& = \frac{2 \pi}{c_g} \int_{-\infty}^{\infty} \left[ x(t) \frac{1}{\sqrt{a}} w\left( \frac{t-\tau}{a} \right) \right] \ \frac{1}{\sqrt{a}} w\left( \frac{t-\tau}{a} \right) \ d\tau \\
\end{align} \quad \forall a>0 $$</span> .</p>
<p>Define the Short-Time Fourier Transform (STFT) as <span class="math-container">$$ X_{a,\tau}(\omega) \triangleq \mathscr{F}\left\{ x(t) \frac{1}{\sqrt{a}} w\left( \frac{t-\tau}{a} \right) \right\} = \int_{-\infty}^{\infty} x(t) \frac{1}{\sqrt{a}} w\left( \frac{t-\tau}{a} \right) e^{-j\omega t} \ dt $$</span> .</p>
<p>That means that <span class="math-container">$$ x(t) \frac{1}{\sqrt{a}} w\left( \frac{t-\tau}{a} \right) = \mathscr{F}^{-1}\big\{ X_{a,\tau}(\omega) \big\} = \frac{1}{2 \pi} \int_{-\infty}^{\infty} X_{a,\tau}(\omega) e^{j\omega t} \ d\omega $$</span> .</p>
<p>From Eq (1), <span class="math-container">$$ \begin{align}
x(t) & = \frac{2 \pi}{c_g} \int_{-\infty}^{\infty} \left[ x(t) \frac{1}{\sqrt{a}} w\left( \frac{t-\tau}{a} \right) \right] \ \frac{1}{\sqrt{a}} w\left( \frac{t-\tau}{a} \right) \ d\tau \\
& = \frac{2 \pi}{c_g} \int_{-\infty}^{\infty} \left[ \frac{1}{2 \pi} \int_{-\infty}^{\infty} X_{a,\tau}(\omega) e^{j\omega t} \ d\omega \right] \ \frac{1}{\sqrt{a}} w\left( \frac{t-\tau}{a} \right) \ d\tau \\
& = \frac{1}{c_g} \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} X_{a,\tau}(\omega) e^{j\omega t} \ \frac{1}{\sqrt{a}} w\left( \frac{t-\tau}{a} \right) \ d\omega \ d\tau \\
& = \frac{1}{2 \pi} \int_{-\infty}^{\infty} \left[ \frac{2 \pi}{c_g} \int_{-\infty}^{\infty} X_{a,\tau}(\omega) \ \frac{1}{\sqrt{a}} w\left( \frac{t-\tau}{a} \right) \ d\tau \right] \ e^{j\omega t} \ d\omega \\
& = \frac{1}{2 \pi} \int_{-\infty}^{\infty} X(\omega) \ e^{j\omega t} \ d\omega \\
\end{align} $$</span> .</p>
<p>This means that <span class="math-container">$$ X(\omega) = \frac{2 \pi}{c_g} \int_{-\infty}^{\infty} X_{a,\tau}(\omega) \ \frac{1}{\sqrt{a}} w\left( \frac{t-\tau}{a} \right) \ d\tau \quad \forall a>0 $$</span></p>
<p>Since this is true for all <span class="math-container">$a>0$</span>, we choose <span class="math-container">$a=\frac{1}{\omega}$</span> which means <span class="math-container">$$ X(\omega)=0 \quad \forall \omega \le 0 $$</span> and consequently <span class="math-container">$$ \Im\big[ x(t) \big] = \mathscr{H} \left\{ \Re\big[ x(t) \big] \right\} $$</span></p>
<p>where <span class="math-container">$\mathscr{H} \{ \cdot \} \ $</span> is the Hilbert Transform.</p>
<p>So we have <span class="math-container">$$ x(t) = \frac{1}{c_g} \int_{-\infty}^{\infty} \int_{0}^{\infty} X_{1/\omega,\tau}(\omega) e^{j\omega t} \ \sqrt{\omega} w\left( \omega (t-\tau) \right) \ d\omega \ d\tau $$</span> .</p>
<p>Substituting in the integral: <span class="math-container">$\frac{1}{a} \rightarrow \omega$</span> and <span class="math-container">$\frac{-1}{a^2} da \rightarrow d\omega$</span>,</p>
<p><span class="math-container">$$ \begin{align}
x(t) & = \frac{1}{c_g} \int_{-\infty}^{\infty} \int_{0}^{\infty} X_{a,\tau}\left(\frac{1}{a}\right) e^{j\frac{t}{a}} \ \frac{1}{\sqrt{a}} w\left( \frac{t-\tau}{a} \right) \frac{1}{a^2} \ da \ d\tau \\
& = \frac{1}{c_g} \int_{-\infty}^{\infty} \int_{0}^{\infty} \left[ X_{a,\tau} \left(\frac{1}{a}\right) e^{j\frac{\tau}{a}} \right] \ \left[ \frac{1}{\sqrt{a}} w\left( \frac{t-\tau}{a} \right) e^{j\frac{t-\tau}{a}} \right] \frac{1}{a^2} \ da \ d\tau \\
\end{align} $$</span></p>
<p>where <span class="math-container">$$ g_{a,\tau}(t) \triangleq \frac{1}{\sqrt{a}} w\left( \frac{t-\tau}{a} \right) e^{j\frac{t-\tau}{a}} $$</span> is the Wavelet,</p>
<p><span class="math-container">$$ g(t) \triangleq g_{1,0}(t) = w(t) e^{jt} $$</span> is the Mother Wavelet, and</p>
<p><span class="math-container">$$ g_{a,\tau}(t) = \frac{1}{\sqrt{a}} g\left( \frac{t-\tau}{a} \right) $$</span>,</p>
<p>and</p>
<p><span class="math-container">$$ \begin{align}
X_{a,\tau} \left(\frac{1}{a}\right) e^{j\frac{\tau}{a}} & = \int_{-\infty}^{\infty} x(t) \frac{1}{\sqrt{a}} w\left( \frac{t-\tau}{a} \right) e^{-j\frac{t-\tau}{a}} \ dt \\
& = \int_{-\infty}^{\infty} x(t) \ g_{a,\tau}^*(t) \ dt \\
\end{align} $$</span></p>
<p>is the Continuous Wavelet Transform of <span class="math-container">$x(t)$</span>. The scaler <span class="math-container">$c_g$</span> is</p>
<p><span class="math-container">$$ c_g = 2 \pi \int_{-\infty}^{\infty} w^2(t) \ dt = 2 \pi \int_{-\infty}^{\infty} \big| g(t) \big|^2 \ dt = \int_{-\infty}^{\infty} \big| G(\omega) \big|^2 \ d\omega $$</span> .</p>
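The last identity for <span class="math-container">$c_g$</span> is just Parseval's theorem, and it can be checked numerically, for instance with a Gaussian window (the grid and step size below are arbitrary discretization choices):

```python
import numpy as np

# Sample a Gaussian window on a grid wide enough for it to decay.
dt = 0.01
t = np.arange(-10.0, 10.0, dt)
w = np.exp(-t**2 / 2)

# Time-domain side: c_g = 2*pi * integral of w^2(t) dt
cg_time = 2 * np.pi * np.sum(w**2) * dt

# Frequency-domain side: c_g = integral of |G(omega)|^2 domega,
# approximating G(omega) by the DFT scaled by dt.
G = np.fft.fft(w) * dt
domega = 2 * np.pi / (len(t) * dt)
cg_freq = np.sum(np.abs(G)**2) * domega
```

For this window the closed form is <span class="math-container">$c_g = 2\pi\sqrt{\pi}$</span>, and the two numerical estimates agree to machine precision because the DFT satisfies Parseval exactly.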
| 212
|
wavelet transform
|
Does the Fast Wavelet Transform produce the same coefficient as the Discrete Wavelet Transform?
|
https://dsp.stackexchange.com/questions/71394/does-the-fast-wavelet-transform-produce-the-same-coefficient-as-the-discrete-wav
|
<p>Does the Fast Wavelet Transform(FWT) produce the same coefficients as the Discrete Wavelet Transform(DWT) if configured for the same depths? Or is the the FWT just an approximation of the DWT?</p>
|
<p>If the discrete wavelet transform can be implemented with a FIR filter bank (with appropriate boundary extensions), then yes: up to numerical precision, the coefficients will be the same.</p>
<p>If the discrete wavelet does not have finite support, then a FIR filter bank implementation would require filter truncation, and the results may differ. In those cases, as for spline wavelets, the processing is often performed in the Fourier domain.</p>
<p>There exist IIR discrete wavelets, which may bridge this gap, yet I am not familiar enough with them to say.</p>
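For a FIR wavelet, the equivalence is easy to check by hand. The sketch below (my own illustration, not part of the answer) implements one Haar level both as literal filtering plus downsampling and as the textbook pairwise sums/differences, and the coefficients coincide:

```python
import numpy as np

def haar_filterbank(x):
    """One analysis level as a FIR filter bank: convolve with the
    Haar low/high-pass filters, then downsample by 2."""
    s = 1 / np.sqrt(2)
    lo, hi = np.array([s, s]), np.array([s, -s])
    cA = np.convolve(x, lo[::-1])[1::2][:len(x) // 2]
    cD = np.convolve(x, hi[::-1])[1::2][:len(x) // 2]
    return cA, cD

def haar_direct(x):
    """The same level written directly as pairwise averages/differences."""
    x = np.asarray(x, dtype=float)
    return (x[0::2] + x[1::2]) / np.sqrt(2), (x[0::2] - x[1::2]) / np.sqrt(2)

x = np.arange(16, dtype=float)
cA_fb, cD_fb = haar_filterbank(x)
cA_d, cD_d = haar_direct(x)
```

Here the Haar filters have finite (length-2) support, so no truncation is needed and the two routes agree to machine precision.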
| 213
|
wavelet transform
|
Should i use window with hop_size in Wavelet Transform or Discrete Wavelet Transform?
|
https://dsp.stackexchange.com/questions/78846/should-i-use-window-with-hop-size-in-wavelet-transform-or-discrete-wavelet-trans
|
<p>I have a signal (audio - voice) with 1 second of duration and a sample rate of 50000 Hz. It is a long signal and I wish to extract some features and apply pattern recognition or classification.</p>
<p>My question is: since the wavelet transform or discrete wavelet transform is a time-frequency (or time-scale) representation, should I not use a window on the signal as a buffer, like in the STFT? Or should I use windows with a hop size, like in the STFT, and apply a wavelet transform to every window?</p>
<p>I think the STFT uses a window to localize the signal in time and see its frequency content. The wavelet transform doesn't need this approach.</p>
<p>I am trying to compare this feature extraction with the well-known mel-frequency spectrogram or mel-frequency cepstral coefficients (MFCCs).</p>
<p>Sorry if this has already been answered; I haven't found it.</p>
<p>(Taking advantage of the opportunity: if anyone wants to explain to me how a filter bank (or discrete wavelet) localizes spectral content in time, is it a property of convolution?)</p>
|
<p>Basically, an analysis linear filter bank is composed of several branches of convolutive filters, each branch with its own hop. The theory consists in finding under which conditions the filter bank is invertible, how to design the filters, and how to choose the hops.</p>
<p>Each level of a dyadic discrete wavelet transform is a filter-bank block with a hop size of <span class="math-container">$2$</span> (downsampling by <span class="math-container">$2$</span>) and an implicit window determined from the envelope of the low-pass and high-pass filters of each branch. With multi-band wavelets, the hop is an integer <span class="math-container">$M\ge 2$</span>.
When you cascade the basic wavelet blocks, things get more intricate, as you will have iterated convolutions of the above filters (which are thus localized) and combinations of the undersampling rates: hop sizes of <span class="math-container">$2$</span>, <span class="math-container">$4$</span>, <span class="math-container">$8$</span>, up to <span class="math-container">$2^L$</span>.
Therefore, discrete wavelets inherently have windows and hops, albeit of different shapes and sizes.</p>
<p>For speech, of which I am not a practitioner, it is not uncommon to use several STFTs with different lengths: short and longer windows. The overlap sizes are often (as far as I know) <span class="math-container">$1/4$</span>, <span class="math-container">$1/2$</span> or <span class="math-container">$3/4$</span> of the number of frequency bins.</p>
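The way hop sizes compound across levels can be sketched minimally (the filtering step is deliberately omitted here; a real DWT convolves with the low-pass filter before each hop):

```python
import numpy as np

x = np.arange(64)

def hop2(x):
    # one analysis level reduced to its hop: keep every 2nd sample
    # (a real DWT would filter before this step)
    return x[::2]

level1 = hop2(x)        # effective hop 2 relative to x
level2 = hop2(level1)   # effective hop 4
level3 = hop2(level2)   # effective hop 8, i.e. 2**L after L levels
```

Cascading L hop-2 stages is thus equivalent to a single hop of 2**L on the coarsest branch, which is exactly the dyadic pattern described above.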
| 214
|
wavelet transform
|
Clarification regarding discrete wavelet transform
|
https://dsp.stackexchange.com/questions/61728/clarification-regarding-discrete-wavelet-transform
|
<p>The book "Conceptual Wavelets" by Fugal explains some major differences between the undecimated discrete wavelet transform (UDWT) and the discrete wavelet transform (DWT). In the UDWT the scale of the wavelet is increased continuously, just as in the continuous wavelet transform, but the scale increases in dyads (powers of 2). In the DWT, which is the most commonly used in MATLAB, the filter size remains the same, but the data is reduced dyadically.</p>
<p>His exact wordings are "<em>As discussed briefly in the preview, instead of dyadically stretching the filters, the conventional (decimated) DWT dyadically shrinks the signal instead</em>"</p>
<p>He is using the example of Haar wavelets on a very small set of data, simply exam scores as a signal of eight exam scores [ 80 80 80 80 0 0 0 0].</p>
<p>So the question is when we downsample by 2, which one is more correct to throw away even samples or odd samples?</p>
|
<p>Assuming that you have a sufficient number of samples, throwing away odd / even samples does not matter.</p>
<p>The DWT can be thought of as measuring time/frequency content with varying levels of time/frequency resolution. For signals whose frequency content does not vary in time (unlike chirps), the odd and even samples contain the same frequency and time content. (Unless we are talking about some explosive transient that occurs in a single sample; then you would need some a priori knowledge about it.)</p>
<p>If the signal is of odd length, taking the odd samples would give you one more sample than the even samples but wouldn't make a difference in terms of information gained.</p>
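As a small check of this claim (my own sketch, not from the answer): a single Haar level computed with either the even-phase or the odd-phase pairing (using circular extension) is perfectly invertible, so neither choice loses information:

```python
import numpy as np

def haar_level(x, phase):
    """Haar analysis with circular extension; phase=0 pairs samples
    (0,1),(2,3),..., phase=1 shifts the pairing by one sample."""
    s = 1 / np.sqrt(2)
    xr = np.roll(x, -phase)
    return (xr[0::2] + xr[1::2]) * s, (xr[0::2] - xr[1::2]) * s

def haar_inverse(cA, cD, phase):
    """Exact inverse of haar_level for the matching phase."""
    s = 1 / np.sqrt(2)
    xr = np.empty(2 * len(cA))
    xr[0::2] = (cA + cD) * s
    xr[1::2] = (cA - cD) * s
    return np.roll(xr, phase)

# The exam-scores signal from the book example:
scores = np.array([80., 80., 80., 80., 0., 0., 0., 0.])
even = haar_inverse(*haar_level(scores, 0), 0)
odd = haar_inverse(*haar_level(scores, 1), 1)
```

Both phases reconstruct the scores exactly; the individual coefficient values differ between the two phases, but the information content is the same.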
| 215
|
wavelet transform
|
Shifting of Shift-Invariant Wavelet Transforms
|
https://dsp.stackexchange.com/questions/14086/shifting-of-shift-invariant-wavelet-transforms
|
<p><strong>Main Question: Why would iterative wavelet/inverse-wavelet transforms cause a shift along the x-axis for undecimated (shift-invariant) wavelet transforms?</strong></p>
<p>I am attempting to remove backgrounds from signals using an iterative wavelet transform method similar to this approach which I found in an article:</p>
<p><img src="https://i.sstatic.net/tglZY.jpg" alt="Article description"></p>
<p>However, I am receiving this output from my python program described below:</p>
<p><img src="https://i.sstatic.net/COyiv.jpg" alt="shifting"></p>
<p>I don't understand why the inverse wavelet is shifting every iteration to the right. What could cause this?</p>
<p>Here is the script I am using to produce this output:</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
import mlpy.wavelet as wave
# This function should be fine
# Make some random data with peaks and noise
def gen_data():
def make_peaks(x):
bkg_peaks = np.array(np.zeros(len(x)))
desired_peaks = np.array(np.zeros(len(x)))
# Make peaks which contain the data desired
# (Mid range/frequency peaks)
for i in range(0,10):
center = x[-1] * np.random.random() - x[0]
amp = 100 * np.random.random() + 10
width = 10 * np.random.random() + 5
desired_peaks += amp * np.e**(-(x-center)**2/(2*width**2))
# Also make background peaks (not desired)
for i in range(0,3):
center = x[-1] * np.random.random() - x[0]
amp = 80 * np.random.random() + 10
width = 100 * np.random.random() + 100
bkg_peaks += amp * np.e**(-(x-center)**2/(2*width**2))
return bkg_peaks, desired_peaks
# make x axis
x = np.array(range(0, 1000))
bkg_peaks, desired_peaks = make_peaks(x)
avg_noise_level = 30
std_dev_noise = 10
size = len(x)
scattering_noise_amp = 100
scat_center = 100
scat_width = 15
scat_std_dev_noise = 100
y_scattering_noise = np.random.normal(scattering_noise_amp, scat_std_dev_noise, size) * np.e**(-(x-scat_center)**2/(2*scat_width**2))
y_noise = np.random.normal(avg_noise_level, std_dev_noise, size) + y_scattering_noise
y = bkg_peaks + desired_peaks + y_noise
xy = np.array(list(zip(x, y)), dtype=[('x',float), ('y',float)])  # list() needed under Python 3
return xy
# Random data Generated
#############################################################
#############################################################
# Wavelet Transformations
#############################################################
xy = gen_data()
# Make 2**n amount of data
new_y, bool_y = wave.pad(xy['y'])
orig_mask = np.where(bool_y==True)
# wavelet transform parameters
levels = 8
wf = 'h'
k = 2
# Remove Noise first
# Wave transform
wt = wave.uwt(new_y, wf, k, levels)
# Matrix of the difference between each wavelet level and the original data
diff_array = np.array([(wave.iuwt(wt[i:i+1], wf, k)-new_y) for i in range(len(wt))])
# Index of the level which is most similar to original data (to obtain smoothed data)
indx = np.argmin(np.sum(diff_array**2, axis=1))
# Use the wavelet levels around this region
noise_wt = wt[indx:indx+1]
# smoothed data in 2^n length
new_y = wave.iuwt(noise_wt, wf, k)
# Background Removal
error = 10000
errdiff = 100
i = -1
iter_y_dict = {0:np.copy(new_y)}
bkg_approx_dict = {0:np.array([])}
while abs(errdiff)>=1*10**-24:
i += 1
# Wave transform
wt = wave.uwt(iter_y_dict[i], wf, k, levels)
# Assume last slice is lowest frequency (background approximation)
bkg_wt = wt[-3:-1]
bkg_approx_dict[i] = wave.iuwt(bkg_wt, wf, k)
# Get the error
errdiff = error - sum(iter_y_dict[i] - bkg_approx_dict[i])**2
error = sum(iter_y_dict[i] - bkg_approx_dict[i])**2
# Make every peak higher than bkg_wt
diff = (new_y - bkg_approx_dict[i])
peak_idxs_to_remove = np.where(diff>0.)[0]
iter_y_dict[i+1] = np.copy(new_y)
iter_y_dict[i+1][peak_idxs_to_remove] = np.copy(bkg_approx_dict[i])[peak_idxs_to_remove]
# new data without noise and background
new_y = new_y[orig_mask]
bkg_approx = bkg_approx_dict[len(bkg_approx_dict.keys())-1][orig_mask]
new_data = diff[orig_mask]
#############################################################
# This part should be fine
# Plot the data and results
#############################################################
fig = plt.figure()
ax_raw_data = fig.add_subplot(121)
ax_WT = fig.add_subplot(122)
ax_raw_data.plot(xy['x'], xy['y'], 'g')
for bkg in bkg_approx_dict.values():
ax_raw_data.plot(xy['x'], bkg[orig_mask], 'k')
ax_WT.plot(xy['x'], new_data, 'y')
fig.tight_layout()
plt.show()
</code></pre>
| 216
|
|
wavelet transform
|
Nonlinear wavelets transform?
|
https://dsp.stackexchange.com/questions/12926/nonlinear-wavelets-transform
|
<p>Is the wavelet transform nonlinear, or not?<br>
Specifically, the continuous wavelet transform with a Morlet function.<br>
I am studying the behavior of a dynamic system which has nonlinear behavior. Can I employ the wavelet transform? </p>
|
<p>A transform being linear has very little to do with its ability to analyze linear or nonlinear systems.</p>
<p>The wavelet transform $W[s(t)]$ of a signal $s(t)$ is linear because $$W[a s_1(t) + b s_2(t)]=a W[s_1(t)]+b W[s_2(t)]$$ for real or complex $a$ and $b$.</p>
<p>The signal you're analyzing is just a signal, it has no concept of linearity. However, if you try to come to conclusions about system properties of a nonlinear system, then you cannot break the analysis down to just a set of base signals to understand the system. In the worst case you would have to look at every possible intput/output pair. Often this can be simplified using known system properties like symmetries (i.e. time invariance).</p>
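The linearity property is easy to verify numerically. The sketch below uses a crude real Morlet-like wavelet and direct convolution (the wavelet formula and parameter choices are mine, for illustration only):

```python
import numpy as np

def toy_cwt(x, scales):
    """Toy CWT: convolve x with a real Morlet-like wavelet at each
    scale. Enough to exercise linearity; not a production CWT."""
    n = len(x)
    u = np.arange(-n // 2, n // 2)
    rows = []
    for s in scales:
        psi = np.cos(5 * u / s) * np.exp(-(u / s) ** 2 / 2) / np.sqrt(s)
        rows.append(np.convolve(x, psi, mode='same'))
    return np.array(rows)

rng = np.random.default_rng(0)
s1 = rng.standard_normal(256)
s2 = rng.standard_normal(256)
a, b = 2.0, -0.7

# W[a*s1 + b*s2] vs a*W[s1] + b*W[s2]: identical up to rounding.
lhs = toy_cwt(a * s1 + b * s2, [2, 4, 8])
rhs = a * toy_cwt(s1, [2, 4, 8]) + b * toy_cwt(s2, [2, 4, 8])
```

The two sides agree because convolution, and hence the transform, is linear; whatever nonlinearity the analyzed system has lives in the signal, not in the transform.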
| 217
|
wavelet transform
|
Where is the mother wavelet defined in the Fast Wavelet Transform?
|
https://dsp.stackexchange.com/questions/71263/where-is-the-mother-wavelet-defined-in-the-fast-wavelet-transform
|
<p>Referring to the <a href="https://en.wikipedia.org/wiki/Fast_wavelet_transform#cite_note-1" rel="nofollow noreferrer">Fast Wavelet Transform</a>, this transform is implemented as a QMF filter bank. This algorithm consists of high/low pass filtering and subsampling. However, a wavelet transform is typically defined by the mother wavelet - the master basis function that is shifted and scaled.</p>
<p>How exactly does this fast wavelet transform define the mother wavelet? I don't see any mention of it.</p>
<p><a href="https://i.sstatic.net/phx4l.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/phx4l.png" alt="enter image description here" /></a></p>
|
<p>TL;DR: the wavelet appears at the end of the synthesis filter bank, iterated infinitely.</p>
<p>Theoretically founded, practical and fast DSP tools are often derived from continuous theory: think about how the DFT is derived from the continuous Fourier transform, by discretizing both in time (like the Discrete-time Fourier transform) and frequency (like Fourier series), which are dual variables.</p>
<p>For a function <span class="math-container">$\psi(t)$</span> to be a wavelet, very few conditions (admissibility) are required. However, discretizing the complementary shift and scale parameters <span class="math-container">$(a,b)$</span> of <span class="math-container">$\psi\left(\frac{t-b}{a}\right)$</span> is more complicated if one wants to remain invertible. And it is even more complicated to obtain a discrete orthogonal basis.</p>
<p>In other words: apart from a handful of well-known cases (Haar, Shannon, Meyer wavelets), if one has a closed-form formula for a nice admissible continuous wavelet, it is VERY unlikely that it can be discretized as a discrete orthogonal wavelet.</p>
<p>The filter bank structure does the job the other way around. If we assume an orthogonal multiresolution framework, we see that an iterated bank of carefully-designed filters can produce a wavelet analysis. Which wavelet? Theoretically, the one defined by a two-scale equation, which generally has no closed-form solution.</p>
<p>You can obtain a good approximation of the wavelet shape with the following procedure. Pick a level <span class="math-container">$L$</span>. In the deepest detail subband (near the approximation), put a one, and zeros elsewhere. Then, do the inverse wavelet transform. The higher the level, the better the approximation. Here is an example with quick-and-dirty code.</p>
<p><a href="https://i.sstatic.net/EW344.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/EW344.png" alt="Daubechies 3 wavelets from different levels" /></a></p>
<pre><code>dwtmode('per')          % periodic extension
nSample = 1024;
data = zeros(nSample,1);
waveletName = 'db3';
for iLevel = 2:7
    % Decompose zeros, place a single unit coefficient in the deepest
    % detail subband, then reconstruct to reveal the wavelet shape.
    [C,L] = wavedec(data,iLevel,waveletName);
    C(L(1)+(L(2)+L(3))/2) = 1;
    % C(L(1)/2) = 1;   % use this line instead to see the scaling function
    waveletAtLevel = waverec(C,L,waveletName);
    subplot(2,3,iLevel-1)
    plot(waveletAtLevel(find(waveletAtLevel))); axis tight; grid on;
    xlabel(['At level ',num2str(iLevel)])
end
</code></pre>
| 218
|
wavelet transform
|
Advantage of STFT over wavelet transform
|
https://dsp.stackexchange.com/questions/79586/advantage-of-stft-over-wavelet-transform
|
<p>I have learned about the STFT and the wavelet transform recently, and the wavelet transform seems better than the STFT in my opinion.
So, I wonder if there is any advantage of using the STFT over the WT, and if so, what are practical applications of the STFT?</p>
|
<p>Wavelet transforms and short-term/short-time Fourier transforms are broad names for classes of transformations that are not totally distinct and may overlap (pun intended).</p>
<p>Both can be efficient for non-stationary features of data, and they both have merits or drawbacks, depending on their parameters and the signal's properties. The STFT typically analyzes signals on fixed-length windows with different modulations, while wavelets use similar modulations (zero-crossings) on different support sizes.</p>
<p>I am a promoter of wavelet-type methods. I should however mention that in image and audio coding, <strong>JPEG and MP3 are widely spread standards akin to the STFT</strong> (fixed length), in their critical version (maximally sub-sampled). Wavelets, although at the base of JPEG 2000, are less used, possibly for implementation/usability issues.</p>
<p>In video coding, and in deep learning, it is more customary to look at different resolutions (akin to wavelets), yet not exactly in the structured dyadic wavelet fashion.</p>
| 219
|
wavelet transform
|
Discrete Wavelet Transform (DWT) and wavelet family
|
https://dsp.stackexchange.com/questions/76594/discrete-wavelet-transform-dwt-and-wavelet-family
|
<p>I have just started reading about wavelets for a data compression problem that I want to solve. I am reading about the Discrete Wavelet Transform (DWT), but I can't understand where the wavelet family that one has to choose actually enters.</p>
<p>This is the DWT schema</p>
<p><a href="https://i.sstatic.net/rr56r.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/rr56r.png" alt="][1]][1]" /></a></p>
<p>I do not understand where the wavelet family is used if only low-pass and high-pass filtering and subsampling are being applied. There is a step I'm missing, or I am lost.</p>
<p>Thanks for the help.</p>
|
<p>There are actually four filters involved:</p>
<ul>
<li>2 for the decomposition of signals [the h[n] and g[n] in the diagram above]</li>
<li>2 for the reconstruction of signals</li>
</ul>
<p>The diagram you are showing is only for signal decomposition. There is a corresponding diagram for signal reconstruction which involves upsampling the coefficients by inserting zeros, then passing them through reconstruction low pass and high pass filters, and then summing the approximation and detail components.</p>
<p>The four filters together form a perfect reconstruction filter bank.</p>
<ul>
<li>dec_lo (decompostion low pass filter), dec_hi (decomposition high pass filter)</li>
<li>rec_lo (reconstruction low pass filter), rec_hi (reconstruction high pass filter)</li>
</ul>
<p>For orthogonal wavelets, these filters have a specific relationship:</p>
<ul>
<li>dec_lo = rec_lo[::-1]</li>
<li>rec_hi = qmf(rec_lo)</li>
<li>dec_hi = rec_hi[::-1]</li>
</ul>
<p>where <code>qmf</code> stands for quadrature mirror filter:</p>
<pre><code>def qmf(h):
    g = h[::-1].copy()  # copy: negating a NumPy view in place would also modify h
    g[1::2] = -g[1::2]
    return g
</code></pre>
<p>Thus, if you have chosen a <code>rec_lo</code> filter properly, all other filters are automatically derived from it. This discussion is limited to orthogonal wavelets.</p>
<p>A wavelet family essentially describes such filter banks. Each member of a wavelet family corresponds to a unique filter bank. Every family of wavelets has some unique features [like the number of vanishing moments of the scaling and wavelet functions, symmetry in the wavelet, etc.].</p>
<p>The wavelet or scaling functions are not directly used in the DWT or IDWT. They characterize the filter banks. However, if you pass a specific impulse function as input to the DWT, you will get the scaling or wavelet function at the appropriate scale and location as output.</p>
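With the Daubechies-2 ("db2") coefficients, whose closed form is well known, the relations above can be checked directly. This sketch (mine, for illustration) derives all four filters from <code>rec_lo</code> and verifies the orthonormality conditions:

```python
import numpy as np

def qmf(h):
    """Quadrature mirror filter: reverse and negate odd-indexed taps."""
    g = h[::-1].copy()   # copy so negating does not modify h via a view
    g[1::2] = -g[1::2]
    return g

# Daubechies-2 reconstruction low-pass filter (closed form).
r3 = np.sqrt(3.0)
rec_lo = np.array([1 + r3, 3 + r3, 3 - r3, 1 - r3]) / (4 * np.sqrt(2.0))

# Derive the remaining three filters exactly as described above.
dec_lo = rec_lo[::-1]
rec_hi = qmf(rec_lo)
dec_hi = rec_hi[::-1]
```

The checks: the low-pass filter has unit norm, it is orthogonal to the high-pass filter, and it is orthogonal to its own double shift, which together are the conditions for an orthogonal perfect-reconstruction filter bank.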
| 220
|
wavelet transform
|
Wavelet transform of a spatial convolution
|
https://dsp.stackexchange.com/questions/52839/wavelet-transform-of-a-spatial-convolution
|
<p>Does anyone know if there exist a kind of convolution theorem for the discrete wavelet transform (decimated or undecimated)? </p>
<p>In other words, can I find a simple form of
<span class="math-container">$W\left[ \int f(t) g(x-t) \, dt\right] $</span> where <span class="math-container">$W$</span> is the discrete wavelet transform operator?</p>
|
<p>I cannot say I have a clear understanding of this at this time. However, here are a few pointers; I'd love to see somebody provide a detailed account. Other bits at: <a href="https://dsp.stackexchange.com/a/31590/15892">Multiplication in the wavelet domain, what does it look like in real space?</a></p>
<ul>
<li><p>The nonexistence of a wavelet function admitting a wavelet convoluton theorem of the Fourier type, 1994, A. R. Lindsey, unpublished report</p></li>
<li><p><a href="https://doi.org/10.1109/ICASSP.1996.543662" rel="nofollow noreferrer">Convolution using the undecimated discrete wavelet transform</a>, 1996</p></li>
</ul>
<blockquote>
<p>Convolution is one of the most widely used digital signal processing
operations. It can be implemented using the fast Fourier transform
(FFT), with a computational complexity of <span class="math-container">$O(N \log N)$</span>. The
undecimated discrete wavelet transform (UDWT) is linear and shift
invariant, so it can also be used to implement convolution. In this
paper, we propose a scheme to implement the convolution using the
UDWT, and study its advantages and limitations.</p>
</blockquote>
<ul>
<li><a href="https://doi.org/10.1109/78.720385" rel="nofollow noreferrer">Convolution theorems for linear transforms</a>, 1998</li>
</ul>
<blockquote>
<p>This correspondence explores the existence of convolution theorem for
linear transformations under a variety of different assumptions. There
are eight convolution theorems, all Fourier-related with only N
operations in the transform domain and no ordering constraints on the
convolution components in the result. They include circular
convolutions and correlations.</p>
</blockquote>
<ul>
<li><a href="https://doi.org/10.1016/j.sigpro.2003.07.014" rel="nofollow noreferrer">The convolution theorem for the continuous wavelet transform</a>, 2004</li>
</ul>
<blockquote>
<p>We study the application of the continuous wavelet transform to
perform signal filtering processes. We first show that the convolution
and correlation of two wavelet functions satisfy the required
admissibility and regularity conditions. By using these new wavelet
functions to analyze both convolutions and correlations, respectively,
we derive convolution and correlation theorems for the continuous
wavelet transform and show them to be similar to that of other joint
spatial/spatial–frequency or time/frequency representations. We then
investigate the effect of multiplying the continuous wavelet transform
of a given signal by a related transfer function and show how to
perform spatially variant filtering operations in the wavelet domain.
Finally, we present numerical examples showing the usefulness of
applying the convolution theorem for the continuous wavelet transform
to perform signal restoration in the presence of additive noise.</p>
</blockquote>
<ul>
<li><a href="http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.48.9577&rep=rep1&type=pdf" rel="nofollow noreferrer">On the uniqueness of the convolution theorem for the Fourier transform</a>, 2008</li>
</ul>
<blockquote>
<p>This paper shows that members of the fourier transform family are the
only linear transforms that have a convolution theorem, that is, that
can replace <span class="math-container">$O(N^2)$</span> operations of a convolution in a time domain by
<span class="math-container">$O(N)$</span> operations in a transform domain. Generally, there is an
additional cost to compute the transform itself. Our observation is
motivated by recent activity in wavelet and subband decompositions and
related spectral analyses, which are attractive alternatives for
signal compression applications. A natural question when using such
techniques is to determine if convolutions of <span class="math-container">$N$</span>-point signals can be
calculated with fewer operations in a compressed transform domain than
in an uncompressed time domain. The answer is negative for a broad set
of assumptions. This paper indicates what assumptions must be relaxed
in seeking a linear transform that has a convolution theorem
comparable to the convolution theorem for Fourier transforms.</p>
</blockquote>
| 221
|
wavelet transform
|
Discrete wavelet transform disadvantages
|
https://dsp.stackexchange.com/questions/61213/discrete-wavelet-transform-disadvantages
|
<p>I read in <a href="https://books.google.com.eg/books?id=49FBDwAAQBAJ&pg=PA79&lpg=PA79&dq=DWT+shift+variance+property+due+to+the+downsampling+process+lack+of+directional+selectivity.&source=bl&ots=wnhZpeTQcY&sig=ACfU3U3heH_2sefjO995Jqn52pJ7udyrug&hl=en&sa=X&ved=2ahUKEwjtqtG9xZnlAhUAD2MBHb-FAC8Q6AEwBHoECAkQAQ#v=onepage&q=DWT%20shift%20variance%20property%20due%20to%20the%20downsampling%20process%20lack%20of%20directional%20selectivity.&f=false" rel="nofollow noreferrer">a paper</a> that the discrete wavelet transform (DWT) has two disadvantages
The first one is the shift variance property due to the downsampling process. Could you please help me understand why downsampling leads to shift variance?
The second disadvantage is the lack of directional selectivity. Why does the DWT lack directional selectivity?</p>
|
<p>A <span class="math-container">$2$</span>-channel stage of a wavelet transform combines two filters in parallel, followed by a down-sampling by two. The latter is the cause of the shift variance, as the filters themselves are time-invariant. Signals <span class="math-container">$$x_0[n] = \{\ldots,0,1,0,1,0,1,\ldots\}$$</span> and <span class="math-container">$$x_1[n] = \{\ldots,1,0,1,0,1,0,\ldots\}$$</span> which are shifted by only one sample, yield respectively <span class="math-container">$$y_0[n] = \{\ldots,0,0,0,0,0,0,\ldots\}$$</span> and <span class="math-container">$$y_1[n] = \{\ldots,1,1,1,1,1,1,\ldots\}$$</span></p>
<p>The responses after down-sampling are thus not shift-invariant. This happens even with low-pass or high-pass filters. In an <span class="math-container">$L$</span>-level wavelet decomposition, the overall wavelet filter bank is invariant only to shifts by multiples of <span class="math-container">$2^L$</span>, not to intermediate shifts.</p>
<p>When applied in 2D, classical wavelet schemes, for simplicity, apply a 1D DWT on the rows and columns of the image separately. The resulting 2D wavelet filter is of rank one, very poor at separating directions other than vertical or horizontal. However, genuine 2D wavelets with better directional selectivity exist. The paper <a href="https://doi.org/10.1016/j.sigpro.2011.04.025" rel="nofollow noreferrer">A panorama on multiscale geometric representations, intertwining spatial, directional and frequency selectivity</a>, Signal Processing, 2011, is devoted to that topic.</p>
| 222
|
wavelet transform
|
Wavelet transform 3D plot for CoP
|
https://dsp.stackexchange.com/questions/31936/wavelet-transform-3d-plot-for-cop
|
<p>I'm trying to perform wavelet transform and make a 3D plot like :</p>
<p><a href="https://i.sstatic.net/GHooq.gif" rel="nofollow noreferrer"><img src="https://i.sstatic.net/GHooq.gif" alt="enter image description here"></a></p>
<p>With the wavelet transform function :</p>
<p>$$
\textrm{CWT}_x^\psi (\tau, s)=\frac{1}{\sqrt{\lvert s\rvert}}\int x(t)\psi\left(\frac{t-\tau}{s}\right)dt
$$</p>
<p>Where $t$ is translation and $s$ is scale.</p>
<p>These are MATLAB and Python functions for wavelet transform:</p>
<ul>
<li><p>MATLAB:<pre>[coefs,sgram,frequencies] = cwt(x,scales,wname, samplingperiod,'scale')</pre> </p></li>
<li><p>Python:<pre>
pywt.wavedec(data, wavelet, mode='sym', level=None)
(cA, cD) = dwt(data, wavelet, mode='sym')
scipy.signal.cwt(data, wavelet, widths)</pre></p></li>
</ul>
<p>I know to analyze the signal I have to move the wavelet (translation) to cover all of the signal. The functions of both MATLAB and Python need scales as parameter but there is nothing about translation. The $x$-axis is scales, the $y$-axis is translation.</p>
<ol>
<li>I assumed $z$ is 2D (surface) because I need the coloring but I dont know what it is. Is it coefficients ? </li>
<li>what are approximation and detail coefficients ?</li>
<li>And what's translation? Is it one to length of my data array (number of data points) ?</li>
</ol>
<p>I'm new in DSP and I'm confused if anyone can help me I'll appreciate it.</p>
<p>Update :</p>
<p>my data:</p>
<pre><code>0.01009
0.010222
0.010345
0.010465
0.010611
0.010768
0.01089
0.011049
0.011206
0.011329
0.011465
0.011613
0.011763
0.011888
0.012015
0.012154
0.012282
0.012408
0.012524
0.012664
0.012791
0.012918
0.013043
0.013157
0.013284
0.0134
0.013516
0.013666
0.013793
0.013909
0.014024
0.014143
0.014271
0.014398
0.014515
0.014618
0.014722
0.01484
0.014957
0.015075
0.015192
0.015298
0.01539
0.015493
0.015598
0.015695
0.015776
0.015884
0.015978
0.016073
0.016157
0.016254
0.016363
0.016473
0.016572
0.016694
0.016803
0.016913
0.017021
0.017154
0.017242
0.017342
0.01745
0.017555
0.017648
0.017743
0.017851
0.017957
0.018065
0.018194
0.01831
0.018439
0.018582
0.018713
0.018843
0.018995
0.019137
0.0193
0.019464
0.019625
0.019781
0.019945
0.020124
0.020304
0.020447
0.020619
0.020762
0.020931
0.021088
0.021254
0.021398
0.021531
0.021648
0.021814
0.021965
0.022109
0.022251
0.022408
0.022563
0.022748
</code></pre>
<p>I used <code>morlet</code> wavelet , <code>1:150</code> scale and I got this result:</p>
<p><a href="https://i.sstatic.net/1it5g.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/1it5g.png" alt="enter image description here"></a></p>
<p>I get troughs at scales <code>50, 150, 250, ...</code> and peaks at <code>100, 200, 300, ...</code>
Why?</p>
|
<p>Basically, I see a plot of a 2D function of discretized scale and translation parameters. Instead of a smooth 2D surface, it looks like 1D plots of coefficients at all scales $s_n$, put behind each other along each location $\tau_n$ on the translation axis. And each 1D plot is colored in a level-set fashion: the "vertical" coloring below the 1D curve is related to the amplitude, potentially with color cycling: red for low amplitudes, then yellow, green, cyan, blue, magenta, red (for high amplitudes). This is apparently an instance of a <a href="https://en.wikipedia.org/wiki/Waterfall_plot" rel="nofollow noreferrer">waterfall plot</a>:</p>
<blockquote>
<p>curves are staggered both across the screen and vertically, with
'nearer' curves masking the ones behind</p>
</blockquote>
<p>with amplitude coloring. So:</p>
<ol>
<li>Coefficients (absolute value) give you the height of the 1D curve (top) and the coloring below.</li>
<li>You don't have approximations and details with the CWT. This is not the DWT. "Only" low-scale to high-scale "detail" coefficients. There is no father wavelet or scaling function in that case.</li>
<li>Yes, without special settings, standard wavelet codes compute coefficients at each sample.</li>
</ol>
<p>Alternatively, you can draw <a href="https://stackoverflow.com/questions/8544823/how-to-make-colour-indicate-amplitude-in-matlabs-ribbon-plot">ribbon plots</a>, with a Matlab code for the color (<a href="http://uk.mathworks.com/matlabcentral/fileexchange/57909-ribboncoloredz-m" rel="nofollow noreferrer">ribboncoloredZ.m</a>): </p>
<p><a href="https://i.sstatic.net/kDXF3.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/kDXF3.png" alt="ribbon plot"></a></p>
| 223
|
wavelet transform
|
3 Band Wavelet Transform In MATLAB
|
https://dsp.stackexchange.com/questions/9777/3-band-wavelet-transform-in-matlab
|
<p>I am currently working on an audio watermarking project in MATLAB. I currently have a code I am using to construct a nxn 3 Band Wavelet Transform matrix. However, when I try to construct a matrix that is larger in size, I get the error "Maximum variable size allowed by the program is exceeded" or "Out of Memory."</p>
<p>Therefore, I was wondering if someone would be able to tell me what may be the cause of this problem and if there is an existing code or toolbox that could construct a 3 Band Wavelet Transform matrix better and more efficiently. Thank you!</p>
<p>The following is the code I am trying to use. I am trying to create a matrix that is the size 3^9 X 3^9 and 3^10 X 3^10. </p>
<pre><code>prompt = {'Enter Matrix Power: '};
dlg_title = 'Matrix Power';
answer = inputdlg(prompt,dlg_title);
n = str2num(answer{1}); %#ok<ST2NM>
A=zeros(3^(n-1),3^n);
v=[0.338386097283860 0.530836187013740 0.723286276743610 0.238964171905760 0.0465140821758900 -0.145936007553990 ];
w1=[-0.117377016134830 0.54433105395181 -0.0187057473531300 -0.699119564792890 -0.136082763487960 0.426954037816980 ];
w2=[0.403636868928920 -0.628539361054710 0.460604752521310 -0.403636868928920 -0.0785674201318500 0.246502028665230 ];
for j=1:3^(n-1)-1
for k=1:3^n;
if k>6+3*(j-1) || k<=3*(j-1)
A(j,k)=0;
else
A(j,k)=v(k-3*(j-1));
end
end
end
j=3^(n-1);
for k=1:3^n
if k<=3
A(j,k)=v(k+3);
elseif k<=3^n-3
A(j,k)=0;
else
A(j,k)=v(k-3*(j-1));
end
end
B=zeros(3^(n-1),3^n);
for j=1:3^(n-1)-1
for k=1:3^n
if k>6+3*(j-1) || k<=3*(j-1)
B(j,k)=0;
else
B(j,k)=w1(k-3*(j-1));
end
end
end
j=3^(n-1);
for k=1:3^n
if k<=3
B(j,k)=w1(k+3);
elseif k<=3^n-3
B(j,k)=0;
else
B(j,k)=w1(k-3*(j-1));
end
end
C=zeros(3^(n-1),3^n);
for j=1:3^(n-1)-1
for k=1:3^n
if k>6+3*(j-1) || k<=3*(j-1)
C(j,k)=0;
else
C(j,k)=w2(k-3*(j-1));
end
end
end
j=3^(n-1);
for k=1:3^n
if k<=3
C(j,k)=w2(k+3);
elseif k<=3^n-3
C(j,k)=0;
else
C(j,k)=w2(k-3*(j-1));
end
end
W=[A;B;C];
Q=zeros(3^n,3^n);
T=zeros(3^n,3^n);
</code></pre>
| 224
|
|
wavelet transform
|
Discrete wavelet transform
|
https://dsp.stackexchange.com/questions/29138/discrete-wavelet-transform
|
<p>I am unable to understand the <strong>discrete wavelet transform</strong> on images. I followed Robi Polikar's tutorial and got a brief idea about the theory. But I'm unable to understand w.r.t images.</p>
<p>Using Matlab's <code>ndwt2('chess.jpg', 2, 'haar')</code> function on the chess board , I obtained the other 7 images in the album. (Link to the album given in the end)</p>
<p><code>ndwt2()</code> returns a structure, whose member <code>dec</code> contains the <strong>approximation (A), horizontal (H) , vertical (V) and diagonal (D)</strong> details. </p>
<p>This is where I have problem. What does <strong>A, H, V and D details</strong> of the image mean?</p>
<p>Also, how come the 1st image in the album is an approximation of the chessboard (assuming approximation means a rough estimate of the image)? It just has lines where the borders of the chess squares are. How is that an approximation, or have I understood it wrong?</p>
<p><strong>EDIT</strong></p>
<p>I was doing the mistake of converting the datatype of image returned by <code>ndwt2</code> (which is the <code>double</code> datatype) to <code>uint8</code>.</p>
<p><a href="https://i.sstatic.net/SHajj.jpg" rel="nofollow noreferrer">these</a> are the images that I now get.</p>
<p>Why does the H details image contain some V details also, though the V in H are noisy? Same for V, it also contains some H details.</p>
<p>This was not the case previously, there H had exclusively vertical lines, V had only horizontal details, D was blank.</p>
<p><a href="https://i.sstatic.net/sWLEE.jpg" rel="nofollow noreferrer">previous album</a></p>
|
<p>One can implement the standard discrete wavelet transform (DWT) on an image (<code>dwt2</code> in Matlab) with a series of filtering and decimation operations, on the rows and the columns. And the wavelet by itself results from the iteration at different levels. </p>
<p>Start with the Haar wavelet. In 1D, it can be implemented by a series of sums and differences ($[1,1]$ and $[1,-1]$ filters) on $2$-pixel sets. If you combine these two filters on the rows and the columns, you get $4$ possible combinations. They are illustrated in the four $2\times 2$ matrices.</p>
<p><a href="https://i.sstatic.net/5HbXk.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/5HbXk.png" alt="One level wavelet decomposition"></a></p>
<p>So take the small $2\times 2$ block from the hat (left image). If you sum the pixels on the rows and the columns, you globally sum all four pixels. That gives you the top-left cyan on the right image. If you repeat the process on all $2\times 2$ blocks, you get the top-left small image, often called the approximation at level 1 (A1). If you apply $[1,-1]$ on the rows, and sum the columns, you get the top-right small image. The difference along the rows detects some vertical details, as you can see (more or less vertical edges). It is called vertical details at level 1 (V1). Now, if you apply $[1,-1]$ on the columns, and sum the rows, you get the bottom-left small image, which detects some more or less horizontal details, called H1. The bottom-right small image computes differences on both rows and columns. It is called D1 for diagonal, though it is not very precise, in general, at detecting $\pm 45°$ edges. </p>
<p>There, you have A, H, V and D. A standard DWT further decomposes the small top-left image. With the same reasoning, you get approximation, vertical, horizontal and "diagonal" details at a second level (the $2$ in your command): A2, D2, H2, V2. </p>
<p>Since A1 has been decomposed, you are left with A2, V2, H2, D2, V1, H1 and D1, hence $7$ images. With the DWT, those images are smaller than the original (by a factor of $4$ per dimension at level 2, and $2$ at level 1).</p>
<p>Here, we have taken separate original $2 \times 2$ blocks. Another word for it is decimation, by $2$, after each filter pass. </p>
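<p>The block-wise sums and differences just described can be sketched in a few lines of pure Python (an editor's illustration, using the unnormalized Haar combinations from the text, not the Matlab internals):</p>

```python
def haar2_level1(img):
    """One Haar level on disjoint 2x2 blocks of a 2D list.
    Returns the (A, H, V, D) subbands, unnormalized."""
    A, H, V, D = [], [], [], []
    for r in range(0, len(img), 2):
        ra, rh, rv, rd = [], [], [], []
        for c in range(0, len(img[0]), 2):
            a, b = img[r][c], img[r][c + 1]
            e, d = img[r + 1][c], img[r + 1][c + 1]
            ra.append(a + b + e + d)        # sum rows and columns -> A
            rv.append((a - b) + (e - d))    # [1,-1] on rows, sum columns -> V
            rh.append((a - e) + (b - d))    # [1,-1] on columns, sum rows -> H
            rd.append((a - b) - (e - d))    # differences both ways -> D
        A.append(ra); H.append(rh); V.append(rv); D.append(rd)
    return A, H, V, D

# A 2x2 block containing a vertical edge responds in V only:
A, H, V, D = haar2_level1([[1, 0],
                           [1, 0]])
# A = [[2]], H = [[0]], V = [[2]], D = [[0]]
```

<p>A vertical edge (left column bright, right column dark) shows up only in V, as in the figure above.</p>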
<p>With <code>ndwt2</code>, the main difference is that you do not decimate; it is like taking blocks with overlap. So, normally, the A, V, H, D images look about the same, except they have almost the same size as the original (up to border effects and extension).</p>
<p>I am surprised by the approximation you show. Indeed, I believe there is a read or format issue with it. Opening it with an image editor and saving it again as a new JPG or PNG seems to fix the problem.</p>
| 225
|
wavelet transform
|
Difference between a wavelet transform and a wavelet decomposition
|
https://dsp.stackexchange.com/questions/10675/difference-between-a-wavelet-transform-and-a-wavelet-decomposition
|
<p>I'm confused about the difference between a wavelet transform and a wavelet decomposition is. For example</p>
<pre><code>load woman
[cA1,cH1,cV1,cD1] = dwt2(X,'db1');
[c,s] = wavedec2(X,2,'db1');
</code></pre>
<p>What's the difference between these two matlab commands, and when would you want to do one over the other?</p>
|
<p>I don't think there is any difference. The documentation for <a href="http://www.mathworks.com/help/wavelet/ref/dwt2.html" rel="noreferrer">dwt2</a> says</p>
<blockquote>
<p>Single-level discrete 2-D wavelet transform</p>
<p>The dwt2 command performs a single-level two-dimensional wavelet decomposition...</p>
</blockquote>
<p>While the documentation for <a href="http://www.mathworks.com/help/wavelet/ref/wavedec2.html" rel="noreferrer">wavedec2</a> says </p>
<blockquote>
<p>Multilevel 2-D wavelet decomposition</p>
</blockquote>
<p>The difference is that <code>dwt2</code> is single-level (produces a single A, H, V, D output):</p>
<p><a href="http://www.mathworks.com/help/wavelet/ref/dwt2.html" rel="noreferrer"><img src="https://i.sstatic.net/gCbV6.gif" alt="enter image description here"></a></p>
<p>and <code>wavedec2</code> is multilevel (produces array C output, which contains multiple A, H, V, D inside it):</p>
<p><a href="http://www.mathworks.com/matlabcentral/fileexchange/27375-plot-wavelet-image-2d-decomposition" rel="noreferrer"><img src="https://i.sstatic.net/dondL.png" alt="enter image description here"></a></p>
| 226
|
wavelet transform
|
Difference between "Discrete Wavelet Transform" and "Discrete Wavelet Decomposition"
|
https://dsp.stackexchange.com/questions/59382/difference-between-discrete-wavelet-transform-and-discrete-wavelet-decomposit
|
<p>I have a rough overview of the Discrete Wavelet Transform (DWT). However, I am confused about Discrete Wavelet <em>Decomposition</em> and have not yet found a good reference which explains this well. What is it actually about? Is it somehow part of the DWT or an inverse operation to it?</p>
|
<p>The <strong>discrete wavelet transform</strong> should denote "the operations" that, applied to some data, yield a <strong>discrete wavelet decomposition</strong>. The first one can be seen as a matrix operator, while the second relates to the actual wavelet coefficients, or the structure thereof, that you would obtain after the application of the first one.</p>
<p>In everyday language, they are often used interchangeably, by a form of <a href="https://en.wikipedia.org/wiki/Metonymy" rel="nofollow noreferrer">metonymy</a>:</p>
<blockquote>
<p>a figure of speech in which a thing or concept is referred to by the
name of something closely associated with that thing or concept</p>
</blockquote>
| 227
|
wavelet transform
|
Intuition behind the Continuous Wavelet Transform?
|
https://dsp.stackexchange.com/questions/15662/intuition-behind-the-continuous-wavelet-transform
|
<p>I was thinking sometime back about how to explain the Continuous Wavelet Transform ELI5. So this is what I came across.</p>
<p>The correlation of two identical signals is 1. So if I have an input signal $f(x)$ made up of an array of frequencies, how can I find out what frequencies exist at what points? Well, slide a signal $m(y)$ where $-\infty < y < +\infty$ over $f(x)$, and at those points where the correlation of these signals is 1, those frequencies are present at those times. This is of course a Continuous Wavelet Transform. Am I correct?</p>
|
<p>I think that the best way to explain the CWT is to start by explaining the Fourier Transform, then move on to explaining the Short-Time Fourier Transform, and then finally explain the CWT as a variation of the STFT.</p>
<p>The Fourier Transform exploits the fact that any decently behaved function can be represented as a sum of sinusoids (i.e. a Fourier series) and that the sinusoid basis possesses the property of orthogonality over a period (for positive integers $n$ and $m$):</p>
<p>$$ \frac{1}{\pi}\int_{-\pi}^{\pi} \sin(nx)\,\sin(mx)\,dx = \frac{1}{\pi}\int_{-\pi}^{\pi} \cos(nx)\,\cos(mx)\,dx = \begin{cases}
1, & \text{if $n=m$} \\
0, & \text{if $n\neq m$}
\end{cases}$$</p>
<p>So, since:</p>
<p>$$ e^{iax} = \cos(ax)+i\,\sin(ax)$$</p>
<p>the Fourier Transform is simply doing this integration for all frequencies and keeping track of which outputs are zero (i.e. that frequency is not in the signal) and which are non-zero (i.e. that frequency is in the data and its output is scaled by how much of it is in there):</p>
<p>$$ S(f) = \int_{-\infty}^{\infty} s(t)\,e^{i2\pi ft}dt $$</p>
<p>In this case you are doing this integration over the entire signal so you can't really tell if the frequency content is changing from the beginning of the signal to the end. One way around this is to compute the Short-Time Fourier Transform: i.e. window the signal, calculate the Fourier transform of the windowed signal, store it, then shift the window down a bit and repeat for all shifts:</p>
<p>$$ S(\tau,f) = \int_{-\infty}^{\infty} w(t- \tau)\,s(t)\,e^{i2\pi ft}dt $$</p>
<p>where</p>
<p>$$ w(t-\tau)=\begin{cases}
1, & \text{if $t\approx \tau$} \\
0, & \text{if $t \not\approx \tau$}
\end{cases} $$</p>
<p>The key thing here is that you are calculating the typical Fourier transform but of a new signal that only exists in a localized part of the t-axis. To emphasize this, you can see the new signal whose Fourier transform we are calculating by associating:</p>
<p>$$ S(\tau,f) = \int_{-\infty}^{\infty} [w(t-\tau)\,s(t)]\,e^{i2\pi ft}dt $$</p>
<p>And here's a graphical example of this showing the new signal for different $\tau$ values and on the right are a few sinusoids to represent what we are using to decompose the signals (i.e. our basis, or kernel).</p>
<p><a href="https://i.sstatic.net/X3fzf.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/X3fzf.png" alt="enter image description here"></a></p>
<p>But, we can also change the association as such without changing the outcome:</p>
<p>$$ S(\tau,f) = \int_{-\infty}^{\infty} s(t)\,[w(t-\tau)\,e^{i2\pi ft}]dt $$</p>
<p>So this means that instead of windowing our signal, we are windowing our basis functions. But here's the kicker, if we are windowing our basis functions, we don't have to use a constant-size window since we know that a basis function of high frequency will need a shorter window than a basis function of low frequency. This is the whole point of the CWT. It is a decomposition of a signal by "wavelets" (i.e. windowed sinusoids in this case) where the windowing is adaptive to the sinusoid frequency. If we choose a Gaussian window (as I have chosen in these examples), then our wavelets are called Morlet wavelets (or Gabor wavelets, in some literature).</p>
<p><a href="https://i.sstatic.net/YzoO9.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/YzoO9.png" alt="CWT"></a></p>
<p>Finally, you can generalize this for any choice of wavelet that you want. In that generalization, you can describe your wavelet basis functions as being "stretched" and "squeezed" versions of some arbitrary "mother" wavelet, $ \psi $. And so the previous equation can now be written as the final form of the CWT:</p>
<p>$$ S(a,b) = \frac{1}{\sqrt b}\int_{-\infty}^{\infty} s(t)\,\psi (\frac{t-a}{b})dt $$</p>
<p>where $a$ is now what used to be our $\tau$ (i.e. a time shift), and $b$ is called the "scale" which is just a parameter to stretch and squeeze the wavelet (similar to what our parameter $f$ was except now the interpretation is more difficult). And the only reason you have the $\frac{1}{\sqrt b}$ up front is to normalize the wavelets so that they have the same "energy" and you end up comparing apples to apples in your time-frequency representation.</p>
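<p>The final formula can also be evaluated numerically. Here is a minimal Python sketch (an editor's illustration, assuming the real part of a Morlet-like wavelet $e^{-u^2/2}\cos(\omega_0 u)$ with $\omega_0 = 5$; $a$ is the shift and $b$ the scale, as in the text):</p>

```python
import math

def cwt_coeff(s, dt, a, b, omega0=5.0):
    """One CWT coefficient S(a, b), evaluated by direct numerical
    integration with the real Morlet-like wavelet
    exp(-u**2 / 2) * cos(omega0 * u)."""
    total = 0.0
    for n, x in enumerate(s):
        u = (n * dt - a) / b
        total += x * math.exp(-u * u / 2) * math.cos(omega0 * u) * dt
    return total / math.sqrt(b)

# A 5 Hz cosine sampled at 100 Hz for 1 second.
f0, fs = 5.0, 100.0
sig = [math.cos(2 * math.pi * f0 * n / fs) for n in range(100)]

# Scale matched to 5 Hz (b = omega0 / (2*pi*f0)) vs. a badly mismatched one.
b_match = 5.0 / (2 * math.pi * f0)
c_match = cwt_coeff(sig, 1 / fs, a=0.5, b=b_match)
c_far = cwt_coeff(sig, 1 / fs, a=0.5, b=1.0)
# |c_match| is large, |c_far| is close to zero
```

<p>The coefficient magnitude is large only when the stretched wavelet oscillates at the signal's frequency, which is the whole point of the time-frequency representation.</p>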
<p>I hope this helps!</p>
<p>-Antonio</p>
| 228
|
wavelet transform
|
Reading the Wavelet transform plot
|
https://dsp.stackexchange.com/questions/7911/reading-the-wavelet-transform-plot
|
<p>I am having trouble understanding on how to read the plot plotted by a wavelet transform,</p>
<p>here is my simple Matlab code,</p>
<pre><code>load noissin;
% c is a 48-by-1000 matrix, each row
% of which corresponds to a single scale.
c = cwt(noissin,1:48,'db4','plot');
</code></pre>
<p><img src="https://i.sstatic.net/Jmu3N.jpg" alt="enter image description here"></p>
<p>So the brightest part means the coefficient magnitude is bigger, but how exactly can I understand from this plot what is happening there?
Kindly help me.</p>
|
<p>This is the example that I think is best for understanding a wavelet plot.</p>
<p>Have a look at the image below.
Waveform (A) is our original signal. Waveform (B) shows a Daubechies 20 (Db20) wavelet about 1/8 second long that starts at the beginning (t = 0) and effectively ends well before 1/4 second. The zero values are extended to the full 1 second. The point-by-point comparison with our pulse signal (A) will be very poor and we will obtain a very small correlation value.</p>
<p>We first shift the unstretched basic or mother wavelet slightly to the right and perform another comparison of the signal with this new waveform to get another correlation value. We continue to shift, and when the Db20 wavelet is in the position shown in (C) we get a little better comparison than with (B), but still very poor because (C) and (A) are different frequencies.</p>
<p>After we have continued shifting the wavelet all the way to the end of the 1 second time interval, we start over with a slightly stretched wavelet at the beginning and repeatedly shift to the right to obtain another full set of these correlation values. Waveform (D) shows the Db20 wavelet stretched to where the frequency is roughly the same as the pulse (A) and shifted to the right until the peaks and valleys line up fairly well. At these particular amounts of shifting and stretching we should obtain a very good comparison and a large correlation value.
Further shifting to the right, however, even at this same stretching will yield increasingly poor correlations. Further stretching doesn't help at all because even when lined up, the pulse and the over-stretched wavelet won’t be the same frequency.</p>
<p><img src="https://i.sstatic.net/gxiJq.png" alt="enter image description here"></p>
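<p>The shift-and-correlate procedure described above can be sketched in a few lines of Python (an editor's toy example with a made-up 4-sample pulse standing in for the Db20 wavelet):</p>

```python
def sliding_correlation(signal, wavelet):
    """Dot product of the wavelet with the signal at every shift."""
    n, m = len(signal), len(wavelet)
    return [sum(signal[k + i] * wavelet[i] for i in range(m))
            for k in range(n - m + 1)]

# A short "pulse" buried at sample 10 in an otherwise flat signal.
pulse = [1, -1, 1, -1]
signal = [0] * 10 + pulse + [0] * 10
corr = sliding_correlation(signal, pulse)
best_shift = max(range(len(corr)), key=lambda k: corr[k])  # -> 10
```

<p>The correlation peaks exactly where the peaks and valleys of the shifted wavelet line up with those of the buried pulse, which is what the bright spots in the CWT display indicate.</p>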
<p>In the CWT we have one correlation value for every shift of every stretched wavelet. To show the correlation values (quality of the “match”) for all these stretches and shifts, we use a 3-D display. </p>
<p>Here it goes,</p>
<p><img src="https://i.sstatic.net/5wZzs.png" alt="enter image description here"></p>
<p>The bright spots indicate where the peaks and valleys of the stretched and shifted wavelet align best with the peaks and valleys of the embedded pulse (dark when no alignment, dimmer where only some peaks and valleys line up, but brightest where all the peaks and valleys align). In this simple example, stretching the wavelet by a factor of 2 from 40 to 20 Hz (stretching the filter from the original 20 points to 40 points) and shifting it 3/8 second in time gave the best correlation and agrees with what we knew a priori or “up front” about the pulse (pulse centered at 3/8 second, pulse frequency 20 Hz). </p>
<p>We chose the Db20 wavelet because it looks a little like the pulse signal. If we didn’t know a priori what the event looked like we could try several wavelets (easily switched in software) to see which produced a CWT display with the brightest spots (indicating best correlation). This would tell us something about the shape of the event.</p>
<p>For the simple tutorial example above we could have just visually discerned the location and frequency of the pulse (A). The next example is a little more representative of wavelets in the real world where location and frequency are not visible to the naked eye. </p>
<p>See the example below,</p>
<p><img src="https://i.sstatic.net/uQFbm.png" alt="enter image description here"></p>
<p>Wavelets can be used to analyze local events. We construct a 300 point slowly varying sine wave signal and add a tiny “glitch” or discontinuity (in slope) at time = 180. We would not notice the glitch unless we were looking at the closeup (b).</p>
<p>Now let's see how the FFT will display this glitch, have a look,
<img src="https://i.sstatic.net/XtbSK.png" alt="enter image description here"></p>
<p>The low frequency of the sine wave is easy to notice, but the small glitch cannot be seen.</p>
<p>But if we use CWT instead of FFT it will clearly display that glitch,
<img src="https://i.sstatic.net/HpJ0p.png" alt="enter image description here"></p>
<p>As you can see, the CWT wavelet display clearly shows a vertical line at time = 180 and at low scales. (The wavelet has very little stretching at low scales, indicating that the glitch was very short.) The CWT also compares well to the large oscillating sine wave which hides the glitch. At these higher scales the wavelet has been stretched (to a lower frequency) and thus “finds” the peak and the valley of the sine wave to be at time = 75 and 225. For this short discontinuity we used a short 4-point Db4 wavelet (as shown) for best comparison. </p>
| 229
|
wavelet transform
|
Implementing wavelet transform for finding transients in the power supply
|
https://dsp.stackexchange.com/questions/28018/implementing-wavelet-transform-for-finding-transients-in-the-power-supply
|
<p>I am new to the concept of wavelet transforms. Can somebody please help me understand this, and also how to implement it in C? Is the short-time Fourier transform more efficient than the wavelet transform for finding transients?</p>
|
<p>I would say that a matching Mother wavelet could be the best for detecting a transient; but both the selection and implementation would be much slower. The old adage: do you want quantity or quality :) or Bandwidth vs. noise. Life doesn't come in our neat intellectual packets.
BTW: the simplest technique is an isolation capacitor and a threshold detector (sad to say I know a lot of ways to "cheat" ).</p>
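<p>That "isolation capacitor plus threshold detector" idea has a crude digital counterpart; a hypothetical minimal sketch (first difference as a rough high-pass, then a threshold, an editor's illustration only):</p>

```python
def detect_transients(x, thresh):
    """Indices where the sample-to-sample change exceeds the threshold,
    a crude digital stand-in for a coupling capacitor + comparator."""
    return [i for i in range(1, len(x)) if abs(x[i] - x[i - 1]) > thresh]

# A steady 5 V rail with a one-sample glitch at index 4.
rail = [5.0, 5.0, 5.0, 5.0, 6.5, 5.0, 5.0]
events = detect_transients(rail, thresh=1.0)  # -> [4, 5]
```

<p>Both the rising and the falling edge of the glitch trip the detector; anything subtler than that is where the wavelet machinery below starts to pay off.</p>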
<p>Evaluation process:</p>
<p>I would look to octave or scilab for a simulator to use.<br>
Construct a model of transients you think are likely or that you are looking for.<br>
Pass the transients through various canned analysis routines and then
Find the transients in the output; the crisper the "find" the more applicable the analysis is.</p>
<p>Reconstruct the input and see how good the reconstruction is; i.e. apply an error criterion to achieve a faithfulness measure.</p>
<p>If this seems tedious it's because it can be: but at each point in the process you get an idea about the effort involved.</p>
<p>Selection time</p>
<p>Design time and complexity</p>
<p>Execution time</p>
<p>Accuracy.</p>
<p>And most importantly:</p>
<p>Knowledge!</p>
<p>Like the wavelet theory: start coarsely/crudely and refine the evaluation process. </p>
<p>I looked on Google and found:</p>
<ul>
<li><a href="http://www.soest.hawaii.edu/MET/Faculty/bwang/bw/paper/wang45.pdf" rel="nofollow noreferrer">http://www.soest.hawaii.edu/MET/Faculty/bwang/bw/paper/wang45.pdf</a>: a real application of waveform/wavelet analysis of El Niño and such, with all the hair.</li>
<li><a href="https://inst.eecs.berkeley.edu/~ee225b/sp14/lectures/shorterm.pdf" rel="nofollow noreferrer">https://inst.eecs.berkeley.edu/~ee225b/sp14/lectures/shorterm.pdf</a>: a description of the process and history.</li>
<li><a href="http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.506.6298&rep=rep1&type=pdf" rel="nofollow noreferrer">http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.506.6298&rep=rep1&type=pdf</a>: discusses real usage ambiguity.</li>
<li><a href="https://dsp.stackexchange.com/questions/8079">Why Wavelet developed when we already had Short-time Fourier transform</a>: links to another discussion.</li>
<li><a href="https://en.wikipedia.org/wiki/Wavelet" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/Wavelet</a></li>
</ul>
<p>Back to question:
Most of the techniques are "complete" in the sense that the transforms are invertible and can recover the original input. In that sense all invertible transforms are equivalent. But like I said, they isolate different features. Wavelet transforming is a mathematical process for utilizing mother wavelets to isolate information in time and space. The STFT is very specific, whereas there are innumerable "wavelet transforms", each having a different sampling mother wavelet shape that is applied recursively to give many (time, freq) graphs; two-dimensional pictures.
STFT and Gabor and such are (as far as I know) specific but closely correspond to normal Fourier/z-transform analysis.</p>
<h2>There are a lot of C wavelet programs in the open-source domain.</h2>
<p>An effort to clarify "disturbance" analysis.</p>
<p>First you have to be able to define what a disturbance is and then the allowable type one/type two results; i.e. false positives/negatives and then do the system design around this. </p>
<p>Let me list a few "disturbances"; not necessarily related to power supplies.</p>
<p>For a nuclear containment vessel we tried to "listen" for what amounts to tearing at 100 kHz-1 MHz with transducers. An exponential increase over time in the power of the signal was taken as (and experimentally verified to be) an indication of a problem. </p>
<p>As far as power supplies go:
There are safety related issues such as over-voltage causing downstream failures. For these I always put in absolute hardware; zeners, surge suppressors. That I think will limit damage and fail-safe; shut things down. Do the thermal calculations with respect to the calories produced by hardware shunts. </p>
<p>Under-voltage is also a problem in some cases, as I found out by almost smashing a co-workers hand. Dropping the field voltage to a DC shunt drive motor causes it to speed up :) So "pulling the plug" at random is not a great idea; and a drop in field voltage should probably be acted on.</p>
<p>Other expected cases are parasitic resonances/oscillations occurring in the output power stage of supplies. These are typically produced by altered frequency response characteristics of bipolar output transistors during transitions. Typically these are real transients and not worrisome unless some downstream equipment is sensitive.</p>
<p>You have specification errors where for some reason, say a passing cyclotron, the system is substantially normal but some specified tolerance is breached. This can also happen slowly when equipment ages and drifts out of spec.</p>
<p>Of course in motor-generator situations transient signals might indicate mechanical wear.</p>
<p>Then you have fast raw supply transients that find their way through the regulators and occur at the output. That's the reason I use hardware suppressors/zeners that can absorb some over-voltage attempts without allowing the disturbance through, and recover easily.</p>
| 230
|
wavelet transform
|
Bandpass filter using wavelet transform
|
https://dsp.stackexchange.com/questions/36914/bandpass-filter-using-wavelet-transform
|
<p>I'm working on a speech recognition project. The first step of this project is to find phonemes in the speech signal. To do that, I found <a href="https://www-users.cs.york.ac.uk/~suresh/papers/PSOS.pdf" rel="nofollow noreferrer">this paper</a> that discusses it.</p>
<p>In the paper, wavelets are used to visualise the signal in different frequency band. Here is my problem : </p>
<p>So far, I know how to decompose the speech signal using wavelet transform at different level (<code>wavedec</code> in MATLAB) But I don't know how to filter this signal. </p>
<p>With the Fourier transform, a simple threshold on the FFT (focused on a specific frequency band) will do the work. And as far as I understand, wavelets kinda work like Fourier, so I guess it works the same way. </p>
|
<p>If you refer to the documentation for <a href="https://fr.mathworks.com/help/wavelet/ref/wavedec.html" rel="nofollow noreferrer"><code>wavedec</code></a>, a signal $x$ is decomposed on one level into two sets of coefficients: $cA_1$ and $cD_1$. They correspond to a low-pass and a high-pass filter applied to $x$, followed by downsampling. As a result, if you reconstruct $x_1$ with <code>waverec</code> from $cA_1$ only (setting $cD_1$ to zero), and $x_2$ from $cD_1$ only (setting $cA_1$ to zero), $x_1$ will mostly correspond to the lower half of the spectrum of $x$, and $x_2$ to the upper half.</p>
<p>The same reasoning works on several levels: if your signal has a range of frequency in $[0\,,f]$, $cD_1$ gathers coefficients mostly from $[f/2\,,f]$, $cD_2$ gathers coefficients mostly from $[f/4\,,f/2]$, etc.</p>
<p>So for a sampling frequency of $44100$ Hertz, the bands would be:</p>
<ul>
<li>$cD_1$: $11025 \to 22050$ </li>
<li>$cD_2$: $5512.5\to 11025 $ </li>
<li>$cD_3$: $2756.25 \to 5512.5$ </li>
<li>$cD_4$: $1378.125 \to 2756.25$ </li>
<li>$cD_5$: $689.0625 \to 1378.125 $ </li>
</ul>
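The dyadic band edges above can be computed mechanically. A small pure-Python sketch (the helper name <code>dwt_band_edges</code> is made up for illustration, not a MATLAB or PyWavelets function):

```python
def dwt_band_edges(fs, levels):
    """Return (low, high) nominal frequency edges in Hz for cD_1 .. cD_levels.

    Each detail level cD_k nominally covers [Nyquist / 2**k, Nyquist / 2**(k-1)],
    where Nyquist = fs / 2.
    """
    nyquist = fs / 2.0
    return [(nyquist / 2 ** k, nyquist / 2 ** (k - 1)) for k in range(1, levels + 1)]

for k, (lo, hi) in enumerate(dwt_band_edges(44100, 5), start=1):
    print(f"cD_{k}: {lo} -> {hi} Hz")
# cD_1: 11025.0 -> 22050.0 Hz
# ...
# cD_5: 689.0625 -> 1378.125 Hz
```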
<p>If you want to filter out a frequency band, you can zero the wavelet coefficients whose subbands intersect that frequency band, and reconstruct the data. This is a form of thresholding in the wavelet domain. As you guessed, <strong>it can work in a way similar to Fourier</strong>.</p>
<p>Indeed, thresholding and shrinkage are very effective with wavelets, possibly more than with a Fourier transform, for denoising. In the wavelet domain, you can design the shrinkage to preserve specific time intervals, allow smooth transitions, etc.</p>
<p>But the wavelet filters are imperfect filters, and the downsampling causes aliasing, so the resulting filtering is not so clean. For a pure band-pass filter, I would not recommend the DWT (discrete wavelet transform), unless the wavelet is of quite high order. </p>
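To make the "imperfect filters" point concrete, here is a minimal pure-Python sketch of a single Haar analysis/synthesis level (not MATLAB's <code>wavedec</code>; the function names are made up for illustration). Keeping both bands reconstructs the signal exactly, while reconstructing from $cA_1$ alone reduces to pairwise averaging, which is only a very crude half-band low-pass:

```python
import math

def haar_level(x):
    """One Haar analysis level: x -> (cA, cD). len(x) must be even."""
    s = 1 / math.sqrt(2)
    cA = [(x[2 * i] + x[2 * i + 1]) * s for i in range(len(x) // 2)]
    cD = [(x[2 * i] - x[2 * i + 1]) * s for i in range(len(x) // 2)]
    return cA, cD

def haar_inverse(cA, cD):
    """Inverse of haar_level: perfect reconstruction when both bands are kept."""
    s = 1 / math.sqrt(2)
    x = []
    for a, d in zip(cA, cD):
        x.append((a + d) * s)
        x.append((a - d) * s)
    return x

x = [4.0, 2.0, 5.0, 5.0, 1.0, 3.0]
cA, cD = haar_level(x)
# Keeping both cA and cD gives back x exactly:
assert all(abs(u - v) < 1e-12 for u, v in zip(haar_inverse(cA, cD), x))
# Zeroing cD and reconstructing gives pairwise averages -- a crude low-pass:
print(haar_inverse(cA, [0.0] * len(cD)))  # [3.0, 3.0, 5.0, 5.0, 2.0, 2.0]
```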
| 231
|
wavelet transform
|
Wavelet transform in control systems
|
https://dsp.stackexchange.com/questions/18053/wavelet-transform-in-control-systems
|
<p>In control systems, the Laplace transform is often used to analyze the stability and the performance of <a href="http://en.wikipedia.org/wiki/LTI_system_theory" rel="nofollow">LTI system</a>. For instance, the LTI system is stable if and only if the <a href="http://en.wikipedia.org/wiki/Transfer_function" rel="nofollow">transfer function</a>, which is the quotient between the Laplace transform of the output of the system and the Laplace transform of the input, has all of its poles in the left half complex plane. </p>
<p>Have wavelet transforms also found applications in the analysis or design of control systems?</p>
|
<p>In the paper <a href="http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=6811177" rel="nofollow">Haar-Based Stability Analysis of LPV Systems</a>, Haar wavelet transform theory has been used to design linear matrix inequalities (LMIs) for analyzing the stability of <a href="https://en.wikipedia.org/wiki/Linear_parametric_varying_control" rel="nofollow">LPV systems</a>. </p>
<p>As the resolution level of the Haar wavelet increases, the number of variables and rows of the designed LMI increases, and the feasibility of this LMI becomes a less conservative condition for the stability of the LPV system. Although stability conditions for LPV systems using LMIs that become less conservative as variables and/or rows are added have already been proposed in the literature, the Haar-based approach can handle a larger class of parametric dependencies as well as non-convex parametric domains. </p>
| 232
|
wavelet transform
|
Why are wavelet transforms implemented in Python/Matlab often called Continuous wavelet transform when they take discrete-time input?
|
https://dsp.stackexchange.com/questions/86713/why-are-wavelet-transforms-implemented-in-python-matlab-often-called-continuous
|
<p>The implementations of the synchrosqueezing wavelet transform in Python (<a href="https://github.com/OverLordGoldDragon/ssqueezepy" rel="nofollow noreferrer">ssqueezepy</a>) and <a href="https://www.mathworks.com/help/wavelet/gs/wavelet-synchrosqueezing.html" rel="nofollow noreferrer">MATLAB</a> both write in their documentation that they implement the synchrosqueezing algorithm on the Continuous Wavelet Transform. However, these functions take discrete-time input and return discrete frequency/scale output. It is my understanding that the wavelet transform that takes discrete time and outputs discrete scale is called the Discrete-Time Wavelet Transform (DTWT), see (<a href="https://scholarship.rice.edu/bitstream/handle/1911/112342/col11454-FINAL.pdf?sequence=1" rel="nofollow noreferrer">Wavelets &amp; Wavelet Transforms by C. Sidney Burrus page 158</a> or page 200 <a href="https://archive.org/details/cnx-org-col11454/page/n199/mode/2up" rel="nofollow noreferrer">here</a>).</p>
<p>Why do these libraries use the term CWT to refer to their functions, and what are the equations used to handle discrete time inputs and discrete outputs?</p>
|
<p>Good question.</p>
<h2>From nomenclature standpoint</h2>
<p>Sampling a continuous-time result (called <em>discretization</em>) most often inherits the original name. For example, we still say "IIR filters", though they're surely finite on a computer.</p>
<p>The following are my observations that are <em>sometimes</em> applicable:</p>
<ul>
<li><strong>discrete</strong> is reserved for methods that are <em>designed</em> to work with finite sequences, and often enjoy exact properties. DFT and DWT are examples.</li>
<li><strong>discrete-time</strong>, as in DTFT, is a mix of continuous and infinite-discrete; the methods are defined over the entirety of input <span class="math-container">$x$</span>, even if we don't have it (as is the case in practice). It's subject to discretization, again without name change.</li>
</ul>
<h3>From 'meaning' standpoint</h3>
<p>If implemented properly, operating on finite sequences produces the same result as sampling the continuous-time result - for example, <a href="https://i.sstatic.net/115Md.png" rel="nofollow noreferrer">CWT of cosine</a>. A non-CWT example: adding two continuous sines and then sampling is the same as adding the two sampled sines:</p>
<p><span class="math-container">$$
(\cos(\omega_0 t) + \cos(\omega_1 t))(n) = \cos(\omega_0 n) + \cos(\omega_1 n)
$$</span></p>
<p>With a caveat, it's also why we can say "integrating" on discrete sequences while doing a sum:</p>
<p><span class="math-container">$$
\int_{t=0}^1 |\cos(2\pi t)|^2\, dt = \frac{1}{N}\sum_{n=0}^{N - 1} |\cos(2 \pi n / N)|^2
$$</span></p>
<p>This caveat is aliasing, and it's also applicable to CWT, but that's more of a "sampled equations may behave unexpectedly", as these effects are certainly computable in continuous-time (e.g. <a href="https://www.wikiwand.com/en/Spectral_leakage" rel="nofollow noreferrer">spectral leakage</a>).</p>
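As a quick sanity check of the integral/sum correspondence above (pure Python, nothing library-specific; the sum is exact here because the cross terms cancel over a full period):

```python
import math

N = 64
# Continuous side: the integral of |cos(2*pi*t)|^2 over [0, 1] is exactly 1/2.
integral = 0.5
# Discrete side: normalized sum over one period of the sampled signal.
riemann = sum(abs(math.cos(2 * math.pi * n / N)) ** 2 for n in range(N)) / N
print(riemann)  # approximately 0.5
assert abs(riemann - integral) < 1e-12
```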
<p>Another perspective is to consider the operations and domains involved: the CFT is all continuous, the DTFT continuous-discrete, the DFT discrete-discrete. While these specifically are carefully related, that is not guaranteed by the above bolded definitions - "meaning" can change if a continuous kernel operates over a discrete sequence. I've not verified it, but the discretized "DTWT" and the discretized CWT might be identical.</p>
<p>Also, theory is done with the CWT rather than the "DTWT", as the latter entails accounting for aliasing and possibly finiteness, which is extremely complicated and unnecessary for what can be predicted and handled in practice.</p>
<h3>Re: source</h3>
<p>I've checked the source after writing this answer - it mirrors some of my comments and also remarks on the inconsistency (hence arbitrariness) of the naming. It also gives an example per my "theory is done" paragraph,</p>
<blockquote>
<p>Because the wavelet basis functions are concentrated in time and not periodic, both the DTWT and DWT will represent infinitely long signals.</p>
</blockquote>
<p>but also says</p>
<blockquote>
<p>in most practical cases, they are made periodic to facilitate efficient computation.</p>
</blockquote>
<p>This needs careful interpreting and wish it was presented differently: nothing is actually made periodic, it just refers to the continuous-time spectrum being <a href="https://dsp.stackexchange.com/a/74734/50076">periodized</a>, as is the case with all discrete sequences.</p>
<h3>Implementation CWT formula</h3>
<p>Fast CWT is implemented with FFT convolution, i.e. circular convolution, that in time domain writes:</p>
<p><span class="math-container">$$
Wf[n, s] = \sum_{m=0}^{N-1} f[m] \psi_s^{*}[m - n] = (f \circledast \bar{\psi_s})[n]
$$</span></p>
<p>where <span class="math-container">$\psi_s[n] = \frac{1}{s} \psi (n/s)$</span>, <span class="math-container">$\bar{\psi_s}[n] = \psi_s^*[-n]$</span>, and <span class="math-container">$s$</span> is scale. What's missing here is that <span class="math-container">$f$</span> is usually replaced by <span class="math-container">$f_\text{padded}$</span> to avoid boundary effects and circular aliasing, followed by unpadding of the result.</p>
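To illustrate the circular form above with a toy real-valued example (not ssqueezepy internals; the "wavelet" kernel here is made up), the cross-correlation sum and the circular convolution with the time-reversed conjugate kernel agree:

```python
def cwt_row(f, psi):
    """W[n] = sum_m f[m] * conj(psi[(m - n) mod N]) -- one scale's output."""
    N = len(f)
    return [sum(f[m] * psi[(m - n) % N].conjugate() for m in range(N))
            for n in range(N)]

def circ_conv(f, g):
    """Circular convolution (f (*) g)[n] = sum_m f[m] * g[(n - m) mod N]."""
    N = len(f)
    return [sum(f[m] * g[(n - m) % N] for m in range(N)) for n in range(N)]

f = [1.0, 2.0, 0.0, -1.0]
psi = [0.5 + 0.5j, 0.25, 0.0, 0.0]  # toy "wavelet" at one scale
# bar-psi[n] = conj(psi[-n]), with indices taken modulo N:
psi_bar = [psi[-n % len(psi)].conjugate() for n in range(len(psi))]

direct = cwt_row(f, psi)
via_conv = circ_conv(f, psi_bar)
assert all(abs(a - b) < 1e-12 for a, b in zip(direct, via_conv))
```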
<p>An example case where the discrete computation matches sampling of the continuous result is a <code>'reflect'</code>-padded pure sine, which is <strong>circularly continuous</strong> and effectively infinite (the wavelet's support effectively vanishes under float precision):</p>
<p><a href="https://i.sstatic.net/jHYCe.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/jHYCe.png" alt="enter image description here" /></a></p>
<p>which one can confirm matches linked "CWT of cosine".</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
from ssqueezepy import cwt
from ssqueezepy.visuals import imshow, plot
t = np.linspace(0, 1, 257, endpoint=True)
x = np.cos(2*np.pi * 5 * t)
Wx = cwt(x, padtype='reflect')[0]
imshow(Wx, abs=1, w=.7, h=.58)
plot(Wx[:, 0], complex=1, w=.7)
</code></pre>
| 233
|
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.