Dataset columns: category (string, 107 classes) · title (string, 15–179 chars) · question_link (string, 59–147 chars) · question_body (string, 53–33.8k chars) · answer_html (string, 0–28.8k chars) · __index_level_0__ (int64, 0–1.58k)
Fourier transform
What is the frequency transfer function when $x(t) = e^{-3t}u(t)$ and $y(t) = 2u(t)[e^{-t}-e^{-4t}]$
https://dsp.stackexchange.com/questions/38018/what-is-the-frequency-transfer-function-when-xt-e-3tut-and-yt-2u
<p>What I did was take the Fourier transform of both $x(t)$ and $y(t)$ and then divide $Y(j\omega)/X(j\omega)$. So $$Y(j\omega) = \frac{2}{1+j\omega}-\frac{2}{4+j\omega}\quad\text{and}\quad X(j\omega) = \frac{1}{3+j\omega}$$ </p> <p>However, the answer is of the form: $\displaystyle \frac{A_1}{1+j\omega}+\frac{A_2}{4+j\omega}$ where the $A$'s are constant. I am not getting the solution in this form. I got $Y(j\omega)$ in this form, but not $H(j\omega)$.</p> <p>Am I doing something wrong?</p>
<p>Your answer is correct; it's just in a different form than the given answer. In order to get your answer in the given form, do the following:</p> <ol> <li>rewrite $Y(j\omega)$ by combining the two terms: $Y(j\omega)=\displaystyle\frac{N(j\omega)}{(j\omega+1)(j\omega+4)}$, where $N(j\omega)$ is a (very simple) polynomial. I'm sure you know how to obtain $N(j\omega)$.</li> <li>write the frequency response as $\displaystyle H(j\omega)=\frac{N(j\omega)(j\omega+3)}{(j\omega+1)(j\omega+4)}$ and use partial fraction expansion to obtain the answer in the given form.</li> </ol> <p>Alternatively, you could also determine $h(t)$ via inverse Fourier transform of $H(j\omega)$ as you obtained it (and noting that multiplication by $j\omega$ corresponds to differentiation in the time domain). You will see that $h(t)$ is in the form $A_1e^{-t}u(t)+A_2e^{-4t}u(t)$, which directly leads to $H(j\omega)$ in the given form.</p>
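The partial-fraction step the answer describes can be checked symbolically. A sketch using sympy, where `s` is a stand-in for $j\omega$ (the variable name is mine, not from the question):

```python
import sympy as sp

s = sp.symbols('s')                 # stand-in for j*omega
Y = 2/(s + 1) - 2/(s + 4)           # Y(jw) from the question
X = 1/(s + 3)                       # X(jw) from the question
H = sp.simplify(Y / X)              # frequency response H(jw) = Y(jw)/X(jw)
print(H)                            # combined rational form
print(sp.apart(H, s))               # partial fractions, i.e. the given form
```

`sp.apart` performs exactly the partial fraction expansion mentioned in step 2.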
34
Fourier transform
Significance of an impulse in the frequency domain
https://dsp.stackexchange.com/questions/44735/significance-of-an-impulse-in-the-frequency-domain
<p>I know that $X(f)$ gives the amplitude associated with the frequency component $f$ of a signal $x(t)$.</p> <p>Now, a sinusoidal signal in time $x(t) = A \cos (2 \pi f_0 t)$ has a Fourier transform $X(f) = \frac{A}{2}[\delta(f-f_0) + \delta(f+f_0) ]$.</p> <p>My question is that the Dirac delta function tends to $\infty$ at $0$. Then, multiplying it by $A/2$ should also result in $\infty$. If this were the case, then what is the significance of $X(f)$ for $x(t) = A \cos(2 \pi f_0 t)$.</p> <p><strong><em>More specifically, what is the significance of the impulses (having infinite amplitudes) in the frequency domain?</em></strong></p> <p>Thank you. :)</p>
<p>The Dirac delta is not strictly a function but a distribution. The Dirac delta is such that $\delta(x)=0 \ \forall x\neq0$ and it has to meet the following restriction:</p> <p>$$\int_{-\infty}^{\infty} \delta(x) \ \mathrm{d}x = 1$$</p> <p>This means that the unit impulse must integrate to $1$ over the real numbers.</p> <p>Let's define this other "function" $$\tilde{\delta}(x)=2\delta(x)$$</p> <p>As we can easily see, this new "function" must integrate to $2$:</p> <p>$$\int_{-\infty}^{\infty} \tilde{\delta}(x) \ \mathrm{d}x = 2\int_{-\infty}^{\infty} \delta(x) \ \mathrm{d}x = 2$$</p> <p>And this is what we get when multiplying an impulse by a constant number. The area under the "function" changes and, due to the Dirac delta being non-zero only at the origin, this means that the impulse <em>must</em> change its height (if we can call it so) in order to change the area it integrates.</p> <p>In the frequency domain, these constants act as weights that determine something like the relevance of a frequency in a given time-domain signal. If we have a small (i.e. it integrates a small area) impulse at a certain frequency, then there is a pure sinusoid of that frequency present in the signal whose amplitude is rather small. On the other hand, a large impulse would correspond to a pure sinusoid with a large amplitude.</p>
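The discrete analogue of this makes the "weight, not height" idea concrete: for a sampled $A\cos(2\pi f_0 t)$, the normalized DFT has a finite peak of height $A/2$ at $f_0$ — the weight of the impulse, never infinity. A quick numpy check (the sample rate and tone frequency are arbitrary values I chose so the tone sits exactly on a bin):

```python
import numpy as np

fs, N, f0 = 1000, 1000, 50          # sample rate, length, tone frequency
t = np.arange(N) / fs
peaks = {}
for A in (1.0, 3.0):
    X = np.fft.rfft(A * np.cos(2 * np.pi * f0 * t)) / N   # normalized DFT
    k = int(np.argmax(np.abs(X)))
    peaks[A] = (k * fs / N, abs(X[k]))                    # (frequency, height)
print(peaks)                         # peak at 50 Hz, height A/2
```

Doubling $A$ doubles the peak's weight, exactly as scaling the Dirac delta doubles its area.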
35
Fourier transform
Mean Square Error and Gibbs oscillations
https://dsp.stackexchange.com/questions/52088/mean-square-error-and-gibbs-oscillations
<p>While studying the convergence of the Fourier transform, I got to know two conditions. </p> <ul> <li>$$\sum_{n=-\infty}^{\infty}|x(n)|&lt;\infty$$</li> <li>$$\sum|x(n)|^{2} \leq [\sum|x(n)|]^{2}$$</li> </ul> <p>While I was reading the text, I found this paragraph quite confusing. I didn't understand it. </p> <blockquote> <p>If a sequence is not absolutely summable but has finite energy, one may employ a type of convergence in which the series converges so the mean square error is 0. The "attendant Gibbs oscillations" at a discontinuity are of practical significance in filter design.</p> </blockquote> <p>I don't know what Gibbs oscillations mean here. So if someone could please explain the meaning of this paragraph (taken from DSP by Alan V. Oppenheim).</p>
<p>The first condition mentioned in your question is absolute summability, which is sufficient for the discrete-time Fourier transform (DTFT) to exist. In this case, the sum given by the DTFT of a sequence converges uniformly. The other condition you probably mean is square summability:</p> <p><span class="math-container">$$\sum_{n=-\infty}^{\infty}|x[n]|^2\lt\infty\tag{1}$$</span></p> <p>in which case the DTFT exists if we relax the condition of uniform convergence. That other type of convergence is called mean-square convergence. In this case, the result of the sum oscillates around points of discontinuities, and those oscillations do not decrease with increasing number of elements in the sum. You can find a more in-depth explanation of this so-called Gibbs phenomenon in <a href="https://en.wikipedia.org/wiki/Gibbs_phenomenon" rel="nofollow noreferrer">this article</a>.</p> <p>The importance of the Gibbs phenomenon in digital filter design is that you might want to approximate an ideal frequency response (with discontinuities) by a filter of finite length (finite impulse response, FIR), and this results in large errors close to the discontinuities. This problem can be alleviated by introducing transition bands and/or by (non-rectangular) windowing instead of simple truncation of the sum.</p>
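The claim that the oscillations "do not decrease with increasing number of elements in the sum" can be seen numerically. A sketch: sum the DTFT of the truncated ideal-lowpass impulse response (cutoff $\pi/2$ is my arbitrary choice) and watch the maximum overshoot near the discontinuity stay at roughly 9% no matter how many terms are kept:

```python
import numpy as np

def truncated_response(N, wc=np.pi / 2, M=8192):
    """DTFT of the ideal lowpass impulse response truncated to |n| <= N."""
    n = np.arange(1, N + 1)
    h = np.sin(wc * n) / (np.pi * n)       # h[n] for n > 0; h[0] = wc/pi
    w = np.linspace(0, np.pi, M)
    # the sum is real because h is even: h[0] + 2*sum h[n] cos(wn)
    return wc / np.pi + 2 * np.cos(np.outer(w, n)) @ h

overshoots = [truncated_response(N).max() for N in (20, 100, 500)]
print(overshoots)    # all stay near 1.09, independent of N
```

The peak error near the band edge does not die out — only its width shrinks — which is exactly why simple truncation is a poor FIR design method and windowing is used instead.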
36
Fourier transform
Is the following property true?
https://dsp.stackexchange.com/questions/53581/is-the-following-property-true
<p>I was looking at a solution of a Fourier Transform question and following property was used, if: <span class="math-container">$$ x(t)\rightarrow X(jw) $$</span> then:</p> <p><span class="math-container">$$ e^{jw_ot}x(t)\rightarrow X(j(w-w_0)) $$</span> <span class="math-container">$$ x(t)\sin(w_0t)\rightarrow \frac{1}{2j}X(j(w-w_0)) - \frac{1}{2j}X(j(w+w_0)) $$</span> If the above statements are true, can we say that for cos:</p> <p><span class="math-container">$$ x(t)\cos(w_0t)\rightarrow \frac{1}{2}X(j(w-w_0)) - \frac{1}{2}X(j(w-w_0)) $$</span></p>
<p>For <span class="math-container">$\cos$</span>, assuming <span class="math-container">$\omega_0$</span> is real, <a href="https://en.wikipedia.org/wiki/Fourier_transform#Functional_relationships,_one-dimensional" rel="nofollow noreferrer">the identity is</a>: <span class="math-container">$$ x(t) \cos(\omega_0 t) \rightarrow \frac{1}{2} X(j(\omega - \omega_0)) + \frac{1}{2} X(j(\omega + \omega_0)) $$</span></p> <p>This is because <span class="math-container">$$ \cos(\omega_0 t) = \frac{1}{2}e^{j \omega_0 t} + \frac{1}{2}e^{-j \omega_0 t} $$</span></p> <p>Use this expression with your first identity and the superposition property of the Fourier transform to arrive at this result. </p> <p>As an aside, also note that <span class="math-container">$$ \sin(\omega_0 t) = \frac{1}{2j}e^{j \omega_0 t} - \frac{1}{2j}e^{-j \omega_0 t} $$</span> By the same reasoning, this is how you arrive at your second identity. </p>
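The DFT obeys the same modulation property exactly when the modulating frequency sits on a bin, which makes a quick numeric check possible: multiplying by $\cos$ splits the spectrum into two half-weight shifted copies. A sketch (the test signal and bin choice are arbitrary):

```python
import numpy as np

N, k0 = 256, 16
n = np.arange(N)
x = np.exp(-0.5 * ((n - N // 2) / 10.0) ** 2)   # arbitrary test signal
X = np.fft.fft(x)

lhs = np.fft.fft(x * np.cos(2 * np.pi * k0 * n / N))
rhs = 0.5 * (np.roll(X, k0) + np.roll(X, -k0))  # half-weight shifted copies
print(np.allclose(lhs, rhs))                    # True
```

`np.roll(X, k0)` is the discrete version of $X(j(\omega-\omega_0))$; the two equal, positive $\tfrac12$ weights mirror the identity above.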
37
Fourier transform
What is a correct way to find or &quot;guess&quot; a kernel which transforms an image into another image using Fourier Transformations?
https://dsp.stackexchange.com/questions/55468/what-is-a-correct-way-to-find-or-guess-a-kernel-which-transforms-an-image-into
<p>Assuming I have two images, apple and orange; also assuming a filter kernel that transforms an apple image into an orange image possibly exists, how would some series of Fourier Transformations (and other spectral operations) get me a filter kernel? Is this possible? If possible, can it be immune to rotations/scaling/translation in the spatial domain? (apple vs rotated apple)</p> <p>In another way, if orange is</p> <pre><code>IFT(FT(apple) * FT(filter)) </code></pre> <p>then how can filter be found using only apple and orange? If it is something like</p> <pre><code>filter = IFT(FT(apple) @@ FT(orange)) </code></pre> <p>then what could <code>@@</code> be? Is this possible?</p> <p>Side question: if this is possible, are we able to extract "similarness" of an apple and an orange, just by looking at the resulting kernel from the <code>@@</code> operation? I mean, if the kernel has only 1 at the center and 0 everywhere else, this would be totally equal (both are apple or both are oranges) but what about other cases? Something like root mean squares of all kernel points?</p>
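Under the question's own model `orange = IFT(FT(apple) * FT(filter))`, one natural candidate for `@@` is pointwise division (inverse filtering). A hypothetical sketch — all names are made up, the images are random stand-ins, and real images need proper regularization (e.g. Wiener deconvolution) because `FT(apple)` can be near zero at some frequencies; this also says nothing about rotation/scale immunity:

```python
import numpy as np

rng = np.random.default_rng(0)
apple = rng.random((32, 32))                  # stand-in "apple" image
true_filter = np.zeros((32, 32))
true_filter[:3, :3] = 1 / 9                   # hypothetical 3x3 blur kernel

# synthesize "orange" exactly as in the question (circular convolution)
orange = np.real(np.fft.ifft2(np.fft.fft2(apple) * np.fft.fft2(true_filter)))

eps = 1e-9                                    # guards bins where FT(apple) ~ 0
recovered = np.real(np.fft.ifft2(np.fft.fft2(orange) / (np.fft.fft2(apple) + eps)))
print(np.allclose(recovered, true_filter, atol=1e-4))
```

For the side question, this model would suggest comparing the recovered kernel against a centered delta, as the question proposes; how meaningful that is for photos of different objects is a separate matter.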
38
Fourier transform
Applying frequency-domain filters on a centered Fourier transform
https://dsp.stackexchange.com/questions/56160/applying-frequency-domain-filters-on-a-centered-fourier-transform
<p>I understand why we shift the Fourier transform such that the 0-frequency is centered for visualization. In the shifted DFT(u,v) of an M*N 2-dimensional image,</p> <ul> <li>the top-left corner of the 4th quadrant is (0,0) frequency or (low u, low v)</li> <li>the bottom-left corner of the 1st quadrant, (M-1,0) or (high u, low v);</li> <li>the bottom-right corner of the 2nd quadrant, (M-1,N-1) or (high u, high v); and</li> <li>the top-right corner of the 3rd quadrant, (0,N-1) or (low u, high v).</li> </ul> <p>Now, when we apply filters (centered again) on the centered DFT, aren't we changing both low and high frequencies closer/further to/from the center?</p> <p>For example, the centered Gaussian high-pass filter is centered at M/2 and N/2, and supposed to attenuate only low frequencies (is it?). However, applying this filter to the shifted DFT will attenuate not only low frequencies, but also high frequencies in the 1st, 2nd, and 3rd quadrants.</p> <p>I did a little experiment and can confirm this effect. Applying the centered Gaussian high-pass filter to a centered DFT was not equivalent to applying the non-centered Gaussian high-pass filter to the same non-centered DFT. I had to apply the filter three more times at each corner of the non-centered DFT to get the same result.</p> <p>I couldn't find any good explanations why this (high-pass filter changing high frequencies around the center or low-pass filter changing low frequencies) is acceptable.</p> <p>To simplify this question, let's take a one-dimensional image as example. The non-shifted DFT has the 0-frequency on the left edge while the M-1 frequency on the right edge. The shifted DFT has the 0-frequency at the center and the M-1 frequency right next to it on the left side. Attenuating this center in effect changes both the 0 and M-1 frequencies. This is different from attenuating the 0-frequency on the non-shifted DFT.</p> <p>Thanks in advance!</p>
<p>Note that an FFT of strictly real data is conjugate symmetric. </p> <p>For a length M FFT of strictly real data, the data in FFT result bin M-1 has the same magnitude as in bin 1 (for a low frequency), but complex conjugated. So to maintain symmetry after filtering, which is necessary for the IFFT-ed image to remain strictly real, a filter has to modify bin(M-i) by the same ratio as bin(i).</p>
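The bookkeeping the questioner describes can also be checked directly: filtering the `fftshift`-ed spectrum with a centered Gaussian high-pass and then `ifftshift`-ing back is identical to applying the once-un-shifted (corner-centered) filter to the raw DFT. A sketch with arbitrary sizes; because the mask is even in the centered coordinates, conjugate symmetry is preserved and the output stays real:

```python
import numpy as np

rng = np.random.default_rng(1)
img = rng.random((64, 64))
rows, cols = img.shape
u = np.arange(rows).reshape(-1, 1) - rows // 2   # centered vertical coordinate
v = np.arange(cols) - cols // 2                  # centered horizontal coordinate
ghpf = 1 - np.exp(-(u**2 + v**2) / (2 * 10.0**2))  # centered Gaussian high-pass

F = np.fft.fft2(img)
# route 1: shift, filter with the centered mask, un-shift
out_shifted = np.real(np.fft.ifft2(np.fft.ifftshift(np.fft.fftshift(F) * ghpf)))
# route 2: un-shift the mask once, filter the raw (non-centered) DFT
out_raw = np.real(np.fft.ifft2(F * np.fft.ifftshift(ghpf)))
print(np.allclose(out_shifted, out_raw))         # True
```

This is why applying the centered mask to a *non*-centered DFT gives a different (wrong) result, matching the questioner's experiment.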
39
Fourier transform
Fourier transform and anti-transform--identity missing
https://dsp.stackexchange.com/questions/59436/fourier-transform-and-anti-trasform-identity-missing
<p>I have a very silly doubt:</p> <p>If we define the power spectral density:</p> <p>S(f)=<span class="math-container">$\frac{1}{2\pi}\int exp(-i\tau2\pi f)r(\tau)d\tau$</span> (1)</p> <p>where <span class="math-container">$r(\tau)$</span> is the correlation coefficient.</p> <p>If we do the Fourier anti-transform, we obtain <span class="math-container">$r(\tau)=\int exp(i\tau2\pi f)S(f)df$</span> (2) </p> <p>Now my doubt is: if I substitute in the second equation the first equation, it seems to me that I don't find the identity <span class="math-container">$r(\tau)=r(\tau)$</span></p> <p>I hope you can help me, maybe I am missing something</p>
<p>There are different ways of showing it; it depends on where you start. Are you willing to accept that the Fourier transform of <span class="math-container">$\delta(t)$</span> is <span class="math-container">$1$</span> and vice versa (i.e., <span class="math-container">$\int_{-\infty}^\infty {\color{red}1} \cdot {\rm e}^{\jmath 2\pi f t} {\rm d}f = \delta(t)$</span>)? If so, it's easy:</p> <p><span class="math-container">$$\begin{align} r(\tau) &amp; = \int_{-\infty}^\infty {\rm e}^{\jmath 2\pi f \tau} S(f) {\rm d}f \\ &amp; = \int_{-\infty}^\infty {\rm e}^{\jmath 2\pi f \tau} \int_{-\infty}^\infty {\rm e}^{-\jmath 2\pi f t} r(t) {\rm d}t {\rm d}f \\ &amp; = \int_{-\infty}^\infty \int_{-\infty}^\infty {\rm e}^{\jmath 2\pi f (\tau-t)} r(t) {\rm d}t {\rm d}f \\ &amp; = \int_{-\infty}^\infty r(t) \int_{-\infty}^\infty {\color{red}1} \cdot {\rm e}^{\jmath 2\pi f (\tau-t)} {\rm d}f {\rm d} t \\ &amp; = \int_{-\infty}^\infty r(t) \delta(\tau-t) {\rm d} t \\ &amp; = \int_{-\infty}^\infty r(\tau) \delta(\tau-t) {\rm d} t \\ &amp; = r(\tau) \int_{-\infty}^\infty \delta(\tau-t) {\rm d} t \\ &amp; = r(\tau) \end{align}$$</span></p> <ul> <li>Step 1: Insert (2) into (1). Note that the inner integration variable is a new one, different from <span class="math-container">$\tau$</span>. I call it <span class="math-container">$t$</span>.</li> <li>Step 2: Pulling the first exp inside the integral, combining the exps.</li> <li>Step 3: Changing integration order (PSD and ACF are absolutely integrable), pulling out what does not depend on the inner integration variable <span class="math-container">$f$</span>.</li> <li>Step 4: Using the fact that the inverse Fourier of a constant is a delta (think of <span class="math-container">$\tau-t$</span> as one variable here, then it's the inverse Fourier of 1).</li> <li>Step 5: Using the sifting property of the delta.</li> <li>Step 6: Moving out the constant term.</li> <li>Step 7: Area under the delta is one.</li> </ul> <p>Of course, step 4 is the critical one. 
If you don't buy it, you need a different, more fundamental/mathematical approach. This is more the engineering point of view I'm presenting here.</p> <p>Regarding your reply with <span class="math-container">$\tau$</span> vs. <span class="math-container">$t$</span>: What you say is not true. See, we're computing <span class="math-container">$r(\tau)$</span> via <span class="math-container">$\int_{-\infty}^\infty {\rm e}^{\jmath 2\pi f\tau} S(f) {\rm d}f$</span>, which means it's an integral over frequency and the integration kernel depends on <span class="math-container">$\tau$</span>. The function <span class="math-container">$S(f)$</span> does not depend on <span class="math-container">$\tau$</span> of course. Now, you are replacing <span class="math-container">$S(f)$</span> by the inverse Fourier transform of the autocorrelation, which is an integral over time. But it's a different time variable, which I called <span class="math-container">$t$</span>. It must be, since if it were <span class="math-container">$\tau$</span> it would mean that <span class="math-container">$S(f)$</span> somehow depends on <span class="math-container">$\tau$</span>. </p> <p>Another way to think about it: The variable <span class="math-container">$\tau$</span> is the independent one. Don't forget that all our integrals are definite ones (from <span class="math-container">$-\infty$</span> to <span class="math-container">$\infty$</span>). We sometimes drop that for laziness, but I added them now to be more clear. Now, the integration variables on the right-hand side are the ones we integrate over, hence the result cannot depend on them. Imagine something like <span class="math-container">$x(\tau) = \int_{-\infty}^\infty \int_{-\infty}^\infty X(f) g(\tau) {\rm d}f {\rm d}\tau$</span>. 
This does not make sense, as the right-hand side integrates over <span class="math-container">$f$</span> and <span class="math-container">$\tau$</span> (the result is a number) whereas the left-hand side depends on <span class="math-container">$\tau$</span>. This is why we need a new time variable.</p>
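The round trip itself — transform, then inverse transform, recover the original — can also be sanity-checked discretely, with the DFT/IDFT pair playing the role of (1) and (2) (a toy stand-in, not the continuous integrals):

```python
import numpy as np

rng = np.random.default_rng(2)
r = rng.standard_normal(128)       # any sequence plays the role of r(tau)
S = np.fft.fft(r)                  # forward transform, stand-in for (1)
r_back = np.fft.ifft(S)            # inverse transform, stand-in for (2)
print(np.allclose(r_back, r))      # True; imaginary parts are ~ machine eps
```

Note that numpy keeps the forward and inverse normalizations consistent for you; the questioner's stray $\frac{1}{2\pi}$ in (1) is exactly the kind of mismatched convention that breaks this identity on paper.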
40
Fourier transform
If the cosine function is periodic, why does it have a Fourier Transform?
https://dsp.stackexchange.com/questions/60763/if-the-cosine-function-is-periodic-why-does-it-have-a-fourier-transform
<p>As far as I understand Fourier Transforms are for non-periodic signals and Fourier Series for periodic signals.</p> <p>So why is it we can take the Fourier Transform of a cosine when it is a periodic function, assuming the above paragraph is correct?</p>
<p>Indeed there are two things you have to know.</p> <p>First, it can be shown that the continuous-time Fourier transform can be obtained from the continuous-time Fourier series by letting the period <span class="math-container">$T$</span> go to infinity. </p> <p>Second, formally speaking, the Fourier transform integral for periodic signals does not converge, hence does not exist. The solution is a generalisation of the Fourier transform by the use of Dirac impulse functions. </p> <p>The result is an interpretation that the Fourier transform of periodic functions is a sum of scaled Dirac impulses at the Fourier harmonic frequencies, the scales being the corresponding Fourier series coefficients.</p>
41
Fourier transform
Why is a circular mask appropriate for Fourier filtering rectangular images?
https://dsp.stackexchange.com/questions/61184/why-is-a-circular-mask-appropriate-for-fourier-filtering-rectangular-images
<p>Suppose I apply 2D DFT to an image with dimensions <span class="math-container">$H{\times}W$</span> where <span class="math-container">$H \neq W$</span>, then shift the DC component to the center. Why does a circular mask capture the lowest frequency components, i.e. why is it not an ellipse given that the image is rectangular? My concern is that for rectangular images, the K lowest frequencies might be arranged in a non-circular pattern.</p>
<p>For simplicity, let's not do any shifting and only consider non-negative frequencies.</p> <p>Let's assume that the horizontal and vertical image dimensions are even integers <span class="math-container">$W$</span> and <span class="math-container">$H$</span>. Looking at the output of a <span class="math-container">$H\times W$</span> 2-d DFT of the image, the <span class="math-container">$u$</span>th column with <span class="math-container">$u\le W/2$</span> represents a horizontal frequency of <span class="math-container">$u/W$</span> times the horizontal sampling frequency and the <span class="math-container">$v$</span>th row with <span class="math-container">$v\le H/2$</span> represents a vertical frequency of <span class="math-container">$v/H$</span> times the vertical sampling frequency. For a square image grid the horizontal and vertical sampling frequencies are equal and in the following denoted by a single variable <span class="math-container">$f_s$</span>. The frequency-magnitude of a 2-d frequency at bin <span class="math-container">$u, v$</span> will then be <span class="math-container">$\sqrt{(u/W)^2 + (v/H)^2}f_s$</span>.</p> <p>For a cut-off frequency <span class="math-container">$f_c$</span> your mask would select frequencies:</p> <p><span class="math-container">$$\sqrt{\left(\frac{u}{W}\right)^2 + \left(\frac{v}{H}\right)^2}f_s &lt; f_c\tag{1}$$</span> <span class="math-container">$$\Rightarrow\frac{f_s^2}{W^2f_c^2}u^2 + \frac{f_s^2}{H^2f_c^2}v^2 &lt; 1.\tag{2}$$</span></p> <p>That indeed defines an ellipse in coordinates <span class="math-container">$u, v$</span>.</p> <p>However, if you consider the actual frequencies <span class="math-container">$\frac{u}{W}f_s$</span>, <span class="math-container">$\frac{v}{H}f_s$</span> as coordinates, then what you have is a circle:</p> <p><span class="math-container">$$\text{Eq. 
1}$$</span> <span class="math-container">$$\Rightarrow\frac{1}{f_c^2}\left(\frac{u}{W}f_s\right)^2 + \frac{1}{f_c^2}\left(\frac{v}{H}f_s\right)^2 &lt; 1.\tag{3}$$</span></p> <p>To summarize, it depends on how you express your frequencies.</p>
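The ellipse-in-bins vs circle-in-frequency distinction from Eq. (2) and (3) is easy to count out. A sketch with an arbitrary $64\times128$ image and cutoff: selecting bins by *physical* frequency (circle) covers about twice as many bins along the axis with twice the samples — i.e. an ellipse in bin coordinates:

```python
import numpy as np

H, W, fc = 64, 128, 0.15                   # H x W image, cutoff in cycles/sample
u = np.arange(H).reshape(-1, 1)            # row (vertical) bin index
v = np.arange(W)                           # column (horizontal) bin index
fu = np.minimum(u, H - u) / H              # actual vertical frequency of row u
fv = np.minimum(v, W - v) / W              # actual horizontal frequency of col v
mask = fu**2 + fv**2 < fc**2               # Eq. (3): a circle in (fu, fv)
print(mask[0].sum(), mask[:, 0].sum())     # bin extent along each axis differs ~2x
```

So a mask that is *circular in pixels* on a rectangular spectrum image mixes unequal horizontal and vertical cutoff frequencies, confirming the answer's summary.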
42
Fourier transform
Fourier antitransform using scaling property?
https://dsp.stackexchange.com/questions/62520/fourier-antitransform-using-scaling-property
<p>I'm trying to calculate the antitransform of:</p> <p><span class="math-container">$\frac{1}{2\cdot(1+5w)^2}$</span></p> <p>Now I know the antitransform of <span class="math-container">$\frac{1}{(1+5w)^2} = t \cdot e^{-5t} u(t) $</span></p> <p>But in this case I got that divided by 2. I assumed I had to use the scaling property which says:</p> <p><span class="math-container">$F[f(ax)] = \frac{1}{|a|} \hat{f}(\frac{w}{a})$</span></p> <p>Now I'm not really sure how to apply this. Could anyone help?</p>
<p>If <span class="math-container">$h(t)$</span> is the inverse Fourier transform of <span class="math-container">$H(\omega)$</span>, then by linearity the inverse Fourier transform of <span class="math-container">$aH(\omega)$</span> is simply <span class="math-container">$ah(t)$</span>. This has nothing to do with the scaling property you mentioned, because the latter refers to the scaling of the <em>argument</em> of the function.</p>
43
Fourier transform
Fourier Transform of an acceleration signal containing engine orders
https://dsp.stackexchange.com/questions/63148/fourier-transform-of-an-acceleration-signal-containing-engine-orders
<p>I am trying to understand how to evaluate this equation in the context of acceleration data which contain engine orders</p> <p><span class="math-container">$a^{f_{e}^{crit}}(f)=\sum_{o}^{K}A^{o,f_{e}^{crit}}\mathscr{F}(cos(2\pi \cdot f_{e}^{crit} \cdot o \cdot t))$</span></p> <p><span class="math-container">$a^{f_{e}^{crit}}$</span> is the acceleration, <span class="math-container">$f_{e}^{crit}$</span> is the critical engine speed, <span class="math-container">$A^{f_{e}^{crit}}$</span> is the acceleration expressed as a complex number and <span class="math-container">$o=0.5,1,1.5,....$</span> are the engine orders</p> <p>My confusion arises when I try to understand how it is possible to sum the time histories of the engine orders and then apply a fourier transform to frequency domain. I am not actually sure is it possible to have a time history of an engine order...</p> <p>Any clarifications will be appreciated. Thanks!</p>
44
Fourier transform
Fourier transform of $\sum_{n=-\infty}^\infty(-1)^n\delta(t-nT_0)$
https://dsp.stackexchange.com/questions/63194/fourier-transform-of-sum-n-infty-infty-1n-deltat-nt-0
<p>Given <span class="math-container">$x(t)$</span> and <span class="math-container">$h(t)=\sum_{n=-\infty}^\infty(-1)^n\delta(t-nT_0)$</span>, I have to compute <span class="math-container">$Y(f)$</span>, where <span class="math-container">$y(t)=x(t)h(t)$</span>. I have thought about using that, in this case, <span class="math-container">$Y(f)=X(f)*H(f)$</span>. I know that <span class="math-container">$\mathscr{F}(\sum_{n=-\infty}^\infty\delta(t-nT_0))=T_0^{-1}\sum_{n=-\infty}^\infty\delta(f-nf_0)$</span>, but how can I deal with that <span class="math-container">$(-1)^n?$</span></p>
<p><strong>HINT:</strong></p> <p>Note that the given <span class="math-container">$h(t)$</span> can be written as</p> <p><span class="math-container">$$h(t)=g(t)-g(t-T_0)\tag{1}$$</span></p> <p>with some <span class="math-container">$g(t)$</span> the Fourier transform <span class="math-container">$G(f)$</span> of which you know. So from <span class="math-container">$(1)$</span> you then get</p> <p><span class="math-container">$$H(f)=G(f)\left(1-e^{-j2\pi fT_0}\right)\tag{2}$$</span></p>
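The conclusion the hint leads to — the factor $\left(1-e^{-j2\pi fT_0}\right)$ cancels the even harmonics of $g$, leaving impulses only at odd multiples of $1/(2T_0)$ — can be verified with a discrete stand-in (spacing and length are arbitrary, with $T_0$ dividing $N$):

```python
import numpy as np

N, T0 = 64, 4
n = np.arange(N)
h = np.where(n % T0 == 0, (-1.0) ** (n // T0), 0.0)  # (-1)^k deltas at n = k*T0
Hf = np.fft.fft(h)
spikes = np.nonzero(~np.isclose(Hf, 0))[0]
print(spikes)            # only the odd multiples of N/(2*T0) = 8 survive
```

The even multiples of $N/(2T_0)$ — where the plain impulse train $g$ has its spectral lines — vanish, exactly as Eq. (2) predicts.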
45
Fourier transform
why does the spectral envelope of human speech not change w.r.t. pitch when taking a Fourier transform?
https://dsp.stackexchange.com/questions/63517/why-does-the-spectral-envelope-of-human-speech-not-change-w-r-t-pitch-when-taki
<p>In the context of speech recognition (recognizing individual speech sounds), the pitch of a certain person can change at different times. </p> <p>Excerpt from Statistical Signal Processing by Steven Kay: </p> <blockquote> <p>This is a natural variability due to the nature of human speech. The spectral envelope will not change with pitch <strong>since the Fourier transform of a periodic signal is a sampled version of the Fourier transform of one period of the signal.</strong></p> </blockquote> <p>What does the part in bold above mean? </p>
<p>The spectral envelope, which determines where the formant frequencies are, is determined by the shape of the mouth, tongue, lips, and nasal coupling. That is independent of the pitch, which is dependent on the tension or stiffness of the vocal cords.</p>
46
Fourier transform
Anti-Aliasing and the Fourier Transform, Gonzalez Digital Image Processing
https://dsp.stackexchange.com/questions/67554/anti-aliasing-and-the-fourier-transform-gonzalez-digital-image-processing
<p>In Gonzalez book Digital Image Processing, section 4.34 (third edition), he writes:</p> <blockquote> <p>Unfortunately, except for some special cases mentioned below, aliasing is always present in sampled signals because, even if the original sampled function is band-limited, infinite frequency components are introduced the moment we limit the duration of the function, which we always have to do in practice.</p> <p>For example, suppose that we want to limit the duration of a band-limited function <span class="math-container">$f(t)$</span> (i.e. a function whose Fourier transform is non-zero only on a closed interval of range of frequencies), to an interval say <span class="math-container">$[0, T]$</span>. We can do this by multiplying <span class="math-container">$f(t)$</span> by the function</p> <p><span class="math-container">$h(t)= 1 $</span> if <span class="math-container">$t \in [0,T]$</span>, and is <span class="math-container">$0$</span> otherwise.</p> <p>Then from the convolution theorem we know that the transform of this product <span class="math-container">$h(t)f(t)$</span> is the convolution of the transforms of the functions. Even if the transform of <span class="math-container">$f(t)$</span> is band-limited, convolving it with <span class="math-container">$F(h(t))=H(\mu)$</span> will yield a result with frequency components that are infinite.</p> </blockquote> <p>This very last statement is what I am not sure about. If the Fourier transform of <span class="math-container">$f$</span> is band-limited, then outside of a closed interval, the transformed function will be <span class="math-container">$0$</span>, and so I am not sure how the convolution of the transforms can yield components at arbitrarily high frequencies. Any insights appreciated.</p>
<p>Note that in order to obtain the Fourier transform of the windowed time domain signal <span class="math-container">$f(t)h(t)$</span>, you need to convolve the Fourier transforms of <span class="math-container">$f(t)$</span> and <span class="math-container">$h(t)$</span>. We know that the Fourier transform of <span class="math-container">$f(t)$</span> is zero outside some interval (because <span class="math-container">$f(t)$</span> is band-limited). However, since <span class="math-container">$h(t)$</span> is time-limited, i.e., it is zero outside some time interval, we know that its Fourier transform cannot be band-limited. For this specific example of a rectangular window <span class="math-container">$h(t)$</span>, we know that the corresponding Fourier transform is a sinc function. Convolving any function with a sinc function results in a function that extends from <span class="math-container">$-\infty$</span> to <span class="math-container">$+\infty$</span>. Consequently, the Fourier transform of the windowed signal <span class="math-container">$f(t)h(t)$</span> also extends from <span class="math-container">$-\infty$</span> to <span class="math-container">$+\infty$</span>. Consequently, the signal <span class="math-container">$f(t)h(t)$</span> cannot be band-limited.</p> <p>In sum, a band-limited signal cannot be time-limited, and a time-limited signal cannot be band-limited. Note, however, that you cannot conclude that a signal is band-limited if it is <em>not</em> time-limited and vice versa. There are signals that are neither time-limited nor band-limited.</p>
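The spreading is easy to observe in a discrete toy experiment. A sketch: a bin-centered cosine is "band-limited" in the DFT sense (its spectrum occupies exactly 2 bins); multiplying it by a rectangular window convolves that spectrum with a Dirichlet (sampled-sinc) kernel and smears energy across a large fraction of all bins:

```python
import numpy as np

N = 1024
n = np.arange(N)
x = np.cos(2 * np.pi * 50 * n / N)        # spectrum lives in bins 50 and N-50
w = (n < N // 2).astype(float)            # rectangular window h over half the record
X = np.abs(np.fft.fft(x))
Xw = np.abs(np.fft.fft(x * w))

n_bins = int((X > 1e-6 * X.max()).sum())          # 2 bins before windowing
spread = float((Xw > 1e-3 * Xw.max()).mean())     # fraction of bins after windowing
print(n_bins, spread)
```

Before windowing, two bins; after windowing, a substantial fraction of the whole spectrum is non-negligible — the discrete picture of "band-limited becomes not band-limited".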
47
Fourier transform
Fourier Transform of Impulse Train
https://dsp.stackexchange.com/questions/34146/fourier-transform-of-impulse-train
<p>Why is the fourier transform of impulse train a impulse train? Is there a intuitive reason behind it?</p>
<p>Intuition can sometimes be misleading. But here are some ideas that might help one move towards creating a mental picture.</p> <p>An infinitely long pure sinewave in the time domain (consisting of just one frequency FT or DFT basis function) will be a single impulse in the frequency domain.</p> <p>Distort the sinewave a little, but leave the waveform perfectly periodic, and the impulse will be followed by an evenly spaced harmonic series. Usually the narrower and sharper the distortion (but keeping the waveform still perfectly periodic), the longer the harmonic series. What might be considered a limiting case of maximum distortion, the narrowest waveform with the sharpest edge, will have the longest harmonic series. Or an infinitely long sine wave in the time domain, maximally distorted into just an infinitely periodic impulse train, will produce an impulse followed by an infinitely long harmonic series, which looks a lot like another periodic impulse train.</p> <p>Make it an even (cosine) function, and all the impulses in the FT will be real and thus symmetric around 0. Add a DC offset to the distorted sine wave to complete the pulse train at 0.</p>
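The finite-length (DFT) version of this picture can be computed directly: an impulse train with period $P$ samples (with $P$ dividing $N$) transforms into another impulse train, with equal-weight spikes every $N/P$ bins. A quick check with arbitrary sizes:

```python
import numpy as np

N, P = 240, 12
x = (np.arange(N) % P == 0).astype(float)   # impulse train, period P samples
Xf = np.fft.fft(x)
spikes = np.nonzero(~np.isclose(Xf, 0))[0]
print(spikes)                               # 0, 20, 40, ..., 220: spacing N/P
print(np.allclose(Xf[spikes], N / P))       # every spike has the same weight
```

Denser impulses in time (smaller $P$) give sparser impulses in frequency, and vice versa, matching the reciprocal picture described above.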
48
Fourier transform
Fourier transform as the integral of a parameter multiplied by an homogeneous wave
https://dsp.stackexchange.com/questions/73087/fourier-transform-as-the-integral-of-a-parameter-multiplied-by-an-homogeneous-wa
<p>Can a Fourier transform in space be interpreted as the integral of a parameter multiplied by an homogeneous wave <span class="math-container">$\sigma$</span>?</p> <p>where <span class="math-container">$\sigma$</span> is:</p> <p><span class="math-container">$\sigma$</span>=<span class="math-container">$e^{-ikx}$</span></p> <p>Are there papers or book that illustrates this interpretation?</p>
<p>Given your definition, and the definition of the Fourier transform:</p> <p>Yes. That is literally the definition:</p> <p><span class="math-container">$$\int_{\mathbb R} s(x) e^{-i2\pi f x} \mathrm dx$$</span></p> <p>By comparison, with <span class="math-container">$k= 2\pi f$</span>, you get the Fourier transform. Every book uses that definition, so you'll find this &quot;interpretation&quot; in every book that introduces the Fourier transform. (This really feels like exactly the same <span class="math-container">$k$</span>-space / location transform I learned in my first solid state physics lecture, so I assume every modern solid state course for physicists does pretty much that, but more in-depth.)</p>
49
Fourier transform
Techniques to deriving DTFTs
https://dsp.stackexchange.com/questions/3369/techniques-to-deriving-dtfts
<p>Are there general techniques to derive DTFTs? Given a bandlimited function $x(t)$, how do I find</p> <p>$$X(\omega)=\sum_{n=-\infty}^\infty x[n]e^{-i\omega n}$$</p> <p>Generally, it is easier to derive the continuous transform (never mind the constants):</p> <p>$$X(f)=\int_{-\infty}^{\infty}x(t)e^{-i \omega t}\, dt$$</p> <p>because we have a wealth of integration theory to fall back on. Ok, one can also calculate $X(\omega)$ from $X(f)$ by <a href="http://en.wikipedia.org/wiki/Discrete-time_Fourier_transform#Relationship_to_sampling" rel="nofollow">adding shifted copies</a>, but we're again back to summing an infinite sequence. Any ideas on how to approach DTFT derivations in a smarter way? Or is sweating out the tiring summation the only way? I don't have any particular function in mind, but if one has to be chosen as an example, I'd pick $x(t) = \text{sech}(a t)$. The continuous FT of this is known and can be found in the table on Wikipedia (#208).</p>
<p>As an example, the rules of series can work. A typical example is the $x[n] = a^n u[n]$ where $|a| &lt; 1$. For instance, say we want to find the DTFT of the signal $x[n] = 0.9^n u[n]$. Then,</p> <p>$$X(e^{j\omega}) = \sum_{n = -\infty}^{\infty} 0.9^n u[n] e^{-j\omega n} \\ = \sum_{n = 0}^{\infty} (0.9 e^{-j\omega})^n$$</p> <p>This is where the series work (geometric power series):</p> <p>$$\sum_{n = 0}^{\infty} a^n = \frac{1}{1-a}$$</p> <p>Then our DTFT is:</p> <p>$$X(e^{j\omega}) = \frac{1}{1 - 0.9 e^{-j\omega}}$$</p>
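As a sanity check, the closed form above can be verified numerically by truncating the sum (a sketch; the variable names are illustrative, not from the answer):

```python
import numpy as np

# Truncated DTFT sum of x[n] = 0.9^n u[n] versus the closed form
# 1 / (1 - 0.9 e^{-jw}). Since 0.9^200 ~ 7e-10, 200 terms suffice.
a = 0.9
w = np.linspace(-np.pi, np.pi, 512)
n = np.arange(200)
partial = np.sum((a ** n)[:, None] * np.exp(-1j * np.outer(n, w)), axis=0)
closed = 1.0 / (1.0 - a * np.exp(-1j * w))

assert np.max(np.abs(partial - closed)) < 1e-6
```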
50
Fourier transform
Getting bpm of song with fft
https://dsp.stackexchange.com/questions/14717/getting-bpm-of-song-with-fft
<p>I would like to get the bpm of a song analyzing the spectrum of the volume. Doing a fft what I get is a peak at the origin and of course that can't be the frequency corresponding to the bpm, so I do the following:</p> <p>$\overline{h} = h - \frac{1}{l}\sum_0^l h$</p> <p>where $h$ is the fft of the volume and $l$ is the length of the signal.</p> <p>Now I expect the bpm frequency to show up in the interval $[1 Hz,3Hz]$ but how can I recognize it?</p>
<p>Unless your FFT is very large you're not going to get much resolution in the range of interest, since each FFT bin holds Fs/N Hz of spectrum (Fs = sample rate, N = FFT size).</p> <p>I have successfully obtained very accurate BPM values by using two FFTs in series and then picking the biggest peak in the region of interest in the final averaged spectrum.</p> <p>IME this works well for signals with a reasonably prominent beat, e.g. typical modern pop &amp; rock music.</p>
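A minimal sketch of the two-stage idea (not the answerer's actual code; the synthetic click train and all parameter values are illustrative assumptions): a first pass reduces the audio to a coarse energy envelope, then an FFT of that envelope exposes the beat rate.

```python
import numpy as np

# Illustrative sketch: reduce audio to a short-time energy envelope,
# then FFT the envelope and pick the biggest peak in the 1-3 Hz band.
# Input: synthetic clicks at 120 BPM.
fs = 8000
audio = np.zeros(fs * 30)          # 30 s of silence
audio[::fs // 2] = 1.0             # a click every 0.5 s -> 120 BPM

hop = 80                           # envelope rate = fs / hop = 100 Hz
frames = audio.reshape(-1, hop)
env = np.sqrt((frames ** 2).sum(axis=1))   # short-time energy envelope
env -= env.mean()                          # remove the DC peak

spec = np.abs(np.fft.rfft(env))
freqs = np.fft.rfftfreq(len(env), d=hop / fs)
band = (freqs >= 1.0) & (freqs <= 3.0)     # 60-180 BPM
bpm = 60.0 * freqs[band][np.argmax(spec[band])]
print(round(bpm))  # -> 120
```

With 30 s of envelope the second FFT has a resolution of 1/30 Hz, i.e. 2 BPM, which illustrates the answer's point about bin resolution.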
51
Fourier transform
Sine of frequency 0 contains sines of all frequencies at once in it
https://dsp.stackexchange.com/questions/15333/sine-of-frequency-0-contains-sines-of-all-frequencies-at-once-in-it
<p>You know that a sine corresponds to a pulse under the Fourier transform. The lower the frequency, the closer the pulse is to the origin. A constant signal is a sine (or cosine, that may be important) of frequency 0. It is a pulse at the origin. This is fine, since <a href="https://dsp.stackexchange.com/questions/9842">we know very well that a pulse transforms to the white spectrum: it is a combination of all sine waves at once</a>. </p> <p>But I see a contradiction here. I have just shown that a constant is both a sine of frequency 0 and a combination of all frequencies. A sine of frequency 0 is a sum of all frequencies. How is this possible?</p>
<p>You are confusing time domain and frequency domain. A constant time domain function (a sine with frequency 0, if you like) corresponds to a delta impulse at the origin (i.e., frequency zero!) in the <em>frequency domain</em>. A delta impulse at frequency zero is zero for all other frequencies $\omega\neq 0$. Consequently, a constant in the time domain is not a combination of all frequencies. What you were probably thinking of is a delta impulse in the <em>time domain</em>, which corresponds to a constant in the frequency domain (i.e., it contains all frequencies).</p> <p>So, summarizing, a constant in one domain corresponds to a delta impulse in the other domain. A constant in the time domain does not contain any frequencies other than 0 (delta at zero in the frequency domain). A constant in the frequency domain (i.e. a combination of all frequencies) corresponds to a delta impulse in the time domain.</p>
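The two cases can be checked with a discrete Fourier transform (a sketch; finite-length vectors stand in for the continuous-time signals):

```python
import numpy as np

# A constant signal transforms to a single spike at bin 0, while a
# time-domain delta transforms to a flat (all-frequency) spectrum.
N = 64
constant = np.ones(N)
delta = np.zeros(N); delta[0] = 1.0

C = np.fft.fft(constant)   # N at bin 0, zero elsewhere
D = np.fft.fft(delta)      # 1 at every bin

assert np.isclose(C[0], N) and np.allclose(C[1:], 0.0)
assert np.allclose(D, 1.0)
```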
52
Fourier transform
What is the difference between multiplying a delta and a step versus convolving a delta and a step?
https://dsp.stackexchange.com/questions/20418/what-is-the-difference-between-multiplying-a-delta-and-a-step-versus-convolving
<p>It seems both will produce another step. Is there no difference? Thanks</p>
<p>First of all you need to see whether you are performing these operations on a continuous-time signal or a discrete-time signal.</p> <p>The sampling (sifting) property says that the product $x(t).\delta(t)$=$x(0).\delta(t)$ provided $x(t)$ is continuous at $t=0$. But here in your question $x(t)$ is a unit step function, which is not continuous at $t=0$. Hence the multiplication $\delta(t).u(t)$ is not defined.</p> <p>In the discrete case the same property is $\delta[n].x[n]$=$x[0].\delta[n]$ and there is no question of continuity as the signal is discrete. In this case your question can be written as $u[n].\delta[n]$=$u[0].\delta[n]$=$\delta[n]$ as $u[0]$ equals 1.</p> <p>Now the convolution property says that $x(t)*\delta(t-t_0)$=$x(t-t_0)$, where $*$ denotes the convolution operator. In your question this would be $u(t)*\delta(t)$=$u(t)$, which is nothing but the unit step function for continuous time.</p> <p>The same is true for discrete signals, i.e. $x[n]*\delta[n-n_0]$=$x[n-n_0]$. So for the discrete case your question would be $u[n]*\delta[n]=u[n]$, which is a unit step function for the discrete case.</p> <p>You can look at the comments by Matt below my answer for better clarity.</p>
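The two discrete-time identities can be checked numerically (a sketch; the finite grid starts at n = 0, so u[n] is all ones here):

```python
import numpy as np

# Discrete check of both operations on u[n] and delta[n].
N = 16
n = np.arange(N)
u = (n >= 0).astype(float)          # u[n] = 1 for n >= 0 on this grid
delta = np.zeros(N); delta[0] = 1.0

product = u * delta                  # u[0]*delta[n] = delta[n]
assert np.allclose(product, delta)

conv = np.convolve(u, delta)[:N]     # sifting: u[n] * delta[n] = u[n]
assert np.allclose(conv, u)
```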
53
Fourier transform
GSP as an extenstion of DSP
https://dsp.stackexchange.com/questions/68291/gsp-as-an-extenstion-of-dsp
<p>I am a PhD. in pure mathematics. </p> <ol> <li>Could you please illustrate the following statement: the eigenvectors of a graph Laplacian behave similarly to a Fourier basis, motivating the development of graph-based Fourier analysis theory.</li> <li>I am reading the interesting <a href="http://www.eusipco2016.org/documents/391615/2699873/GSP_Tutorial_EUSIPCO_V1.pdf" rel="nofollow noreferrer">paper</a>, but could not get how the Fourier transform is extended to the graph Fourier transform as illustrated on page 23. Indeed, why should the matrix V be considered as the extension of (discrete) Fourier transform?</li> </ol>
54
Fourier transform
How to get a non-equally spaced FFT back into the time domain
https://dsp.stackexchange.com/questions/74937/how-to-get-a-non-equally-spaced-fft-back-into-the-time-domain
<p>I have a signal that I STFT and then filter using an ERB-spaced filterbank. At some point after this I want to get the signal back into the time domain; how can I go about this? Using a standard iSTFT function won't work because it assumes linearly spaced frequency bins, AFAIK? I've put a code snippet below.</p> <p>I'm also not sure what to tag this question as apart from <code>fourier-transform</code></p> <pre><code>Y = stft(sig) # Y.shape = (1025,4000) fb = filterbank() # fb.shape = (20,1025) Y_erb = matrix_multiply(fb,Y) # Y_erb.shape = (20,4000) </code></pre>
<p>The short answer is that you can solve a least squares problem with the input signal as the decision variables.</p> <h3>About Nonequispaced FFT</h3> <p>There is a <a href="https://stackoverflow.com/questions/67350588/example-python-nfft-fourier-transform-issues-with-signal-reconstruction-normal">related question in stack overflow</a>.</p> <p>If you are not using python, you can take a look into the <a href="https://github.com/NFFT/nfft" rel="nofollow noreferrer">nfft github</a>.</p> <p><a href="https://www-user.tu-chemnitz.de/%7Epotts/paper/nfft3.pdf" rel="nofollow noreferrer">Here</a> will find some information. Section 4.6 covers the inversion of such transforms.</p>
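One way to set up that least squares problem is via the pseudo-inverse of the filterbank matrix (a sketch; the random arrays stand in for a real STFT and ERB filterbank, and since 20 bands cannot determine 1025 bins this is only a minimum-norm estimate to feed to a standard iSTFT):

```python
import numpy as np

# The filterbank reduces 1025 frequency bins to 20 bands, so exact
# inversion is impossible; the pseudo-inverse gives the minimum-norm
# least-squares estimate of the original STFT. Shapes follow the
# snippet in the question.
rng = np.random.default_rng(0)
fb = np.abs(rng.standard_normal((20, 1025)))   # stand-in ERB filterbank
Y = rng.standard_normal((1025, 400))           # stand-in STFT data

Y_erb = fb @ Y
Y_hat = np.linalg.pinv(fb) @ Y_erb             # least-squares estimate

# Y_hat reproduces the band-domain data exactly, even though Y itself
# cannot be recovered:
assert np.allclose(fb @ Y_hat, Y_erb)
```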
55
Fourier transform
What frequencies are present in the Fourier transform of the Dirac impulse?
https://dsp.stackexchange.com/questions/51085/what-frequencies-are-present-in-the-fourier-transform-of-the-dirac-impulse
<p>When I do the Fourier transform of the Dirac impulse I get a pure sinusoid (or complex exponential, however you wanna call it) but I read in several places that all frequencies are present in the dirac impulse and all of them with the same amplitude. How is this possible? Am I wrong when I perform the transform?</p>
<p>A Dirac impulse <span class="math-container">$x(t)=\delta(t-d)$</span> has the continuous-time Fourier transform <span class="math-container">$X(\Omega)$</span> of <span class="math-container">$$\mathcal{F}\{\delta(t-d) \} = 1 e^{-j\Omega d} $$</span></p> <p>whose <strong>magnitude</strong> is <span class="math-container">$$|X(\Omega)| = 1 ~~~, \text{ for all } \Omega $$</span> and a <strong>phase</strong> of <span class="math-container">$$\angle X(\Omega) = -\Omega \cdot d $$</span></p> <p>Note that it's incomplete to think of the real or imaginary parts of the Fourier transform alone. Rather the magnitude and phase point of views are more reflective of the nature of the result.</p> <p>So in this case the magnitude is <span class="math-container">$1$</span> and hence it's said to contain all frequencies of magnitude <span class="math-container">$1$</span>. Note that these are differential amplitude components of <strong>continuum</strong> frequency range as opposed to a finite amplitude of discrete set of frequency components, aka line components.</p>
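The discrete analogue is easy to verify numerically: a delta delayed by d samples has unit magnitude at every DFT bin and a linear phase of slope proportional to the delay (illustrative values for N and d):

```python
import numpy as np

# FFT of a delta delayed by d samples: magnitude 1 everywhere,
# phase -2*pi*k*d/N at bin k.
N, d = 64, 5
x = np.zeros(N); x[d] = 1.0
X = np.fft.fft(x)

k = np.arange(N)
assert np.allclose(np.abs(X), 1.0)
assert np.allclose(X, np.exp(-2j * np.pi * k * d / N))
```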
56
Fourier transform
Solving Fouriertransform exercises without explicitly doing the transform
https://dsp.stackexchange.com/questions/76059/solving-fouriertransform-exercises-without-explicitly-doing-the-transform
<p>Hey there, in the signal processing course I am studying there is an exercise that reads:</p> <p>The sequence <span class="math-container">$x(n)$</span> is given by <span class="math-container">$x(n)=\{-1\quad2\quad \underline{-3}\quad 2\quad -1\}$</span> with Fourier transform <span class="math-container">$X(\omega)$</span>. Without explicitly performing the Fourier transform, solve the following:</p> <p>a) <span class="math-container">$X(0)$</span></p> <p>b) <span class="math-container">$\arg X(\omega)$</span></p> <p>c) <span class="math-container">$\int_{-\pi}^{\pi}X(\omega)d\omega$</span></p> <p>d) <span class="math-container">$X(\pi)$</span></p> <p>e) <span class="math-container">$\int_{-\pi}^{\pi}|X(\omega)|^2d\omega$</span></p> <p>I could not figure out how to solve any of this without doing the actual transform. I eventually solved all of it with the transform, but still I am none the wiser how to solve any of it without doing the transform. Suggestions?</p> <p>Please and thank you!</p>
<p>Some key take-aways / properties of the Fourier transform will help reveal the answers. There is a method to the madness in extracting these take-aways, and the high-level understanding of what the Fourier transform represents is what makes this exercise useful.</p> <p>The line under the 3 indicates the assumed position of the vertical axis, meaning <span class="math-container">$t=0$</span>, and thus we see we have a symmetric real waveform in time. A symmetric real waveform will always have a real transform in frequency (and vice versa); this is referred to as an even function. Similarly an odd function (where the negative side is the same as the positive side but sign-reversed) will always have an imaginary transform. (This is the proof, once those points are proven, that a causal function in time, as the sum of an even and an odd function, must always have a complex transform.) Remember this relationship; it's useful.</p> <p>So knowing that allows you to solve (B).</p> <p>To solve (A), consider what <span class="math-container">$X(0)$</span> represents (hint: what is the Fourier transform of a 9V battery?). Without actually solving the Fourier transform, look at the formula when <span class="math-container">$\omega=0$</span> and see what it reduces to in that case. Remember that one too!</p> <p>To solve (D), do the same as above when <span class="math-container">$\omega=\pi$</span> and look at how the formula simplifies. Note how +1, -1, +1, -1 ... comes into the picture and how easy it makes it to do that one in your head the next time around.</p> <p>Note how (C) and (E) are integrated over the entire unique range of <span class="math-container">$\omega$</span>. Go through a few cases to see what always occurs in that integration between the time-domain signal and the frequency-domain signal. Studying the formulas carefully will help you to do it in your head in the future. Look into Parseval's theorem and really understand what it is describing.</p> <p>Hope this helps!</p>
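For reference, the shortcuts these hints lead to can be checked numerically against a sampled DTFT (my own sketch, not part of the original answer): X(0) is the sum of the samples, X(π) is the alternating sum, the integral of X is 2π x[0], and Parseval gives the integral of |X|² as 2π times the energy.

```python
import numpy as np

# x[n] = {-1, 2, -3, 2, -1}, with n = 0 at the underlined -3.
x = np.array([-1.0, 2.0, -3.0, 2.0, -1.0])
n = np.arange(-2, 3)
w = np.linspace(-np.pi, np.pi, 4097)          # w[2048] == 0 exactly
X = np.exp(-1j * np.outer(w, n)) @ x          # sampled DTFT (real: even x)
dw = w[1] - w[0]

assert np.isclose(X[2048].real, x.sum())                    # (a) X(0) = -1
assert np.isclose(X[-1].real, (x * (-1.0) ** n).sum())      # (d) X(pi) = -9
assert np.isclose(np.sum(X.real[:-1]) * dw,                 # (c) 2*pi*x[0]
                  2 * np.pi * x[2], rtol=1e-3)
assert np.isclose(np.sum(np.abs(X[:-1]) ** 2) * dw,         # (e) Parseval
                  2 * np.pi * np.sum(x ** 2), rtol=1e-3)
```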
57
Fourier transform
Can the magnitude of a discrete-time Fourier transform be negative?
https://dsp.stackexchange.com/questions/79374/can-the-magnitude-of-a-discrete-time-fourier-transform-be-negative
<p>Consider the discrete-time system <span class="math-container">$$ H(z) = 1 + z^{-1} + z^{-2} + z^{-3} $$</span> To obtain the magnitude of the discrete-time Fourier transform, I substitute <span class="math-container">$z = e^{j\omega}$</span> to get <span class="math-container">\begin{align} H(\omega) &amp;= 1 + e^{-j\omega} + e^{-2j\omega} + e^{-3j\omega} \\ &amp;= e^{-\frac{3}{2}j\omega} \cdot \left(e^{\frac{3}{2}j\omega} + e^{\frac{1}{2}j\omega} + e^{-\frac{1}{2}j\omega} + e^{-\frac{3}{2}j\omega}\right) \\ &amp;= e^{-\frac{3}{2}j\omega} \cdot \left(2 \cos\left(\frac{3}{2}\omega\right) + 2 \cos\left(\frac{1}{2}\omega\right)\right) \end{align}</span> As <span class="math-container">$H(\omega)$</span> is now in polar form, such that <span class="math-container">\begin{align} r(\omega) &amp;= 2 \cos\left(\frac{3}{2}\omega\right) + 2 \cos\left(\frac{1}{2}\omega\right) \\ \theta(\omega) &amp;= -\frac{3}{2}\omega \end{align}</span> Then the magnitude of <span class="math-container">$H(\omega)$</span> is just <span class="math-container">$r(\omega)$</span>. However, <span class="math-container">$r(\omega)$</span> can be negative for some <span class="math-container">$\omega$</span>, which does not make sense to me, as the magnitude of a complex number can't be negative. What am I missing?</p>
<p>You can't conclude that the magnitude response is <span class="math-container">$r(\omega)$</span> and the phase response is <span class="math-container">$\theta(\omega)$</span>.</p> <p>Note that <span class="math-container">$e^{j\pi}=-1$</span>, at the frequencies that <span class="math-container">$r(\omega) &lt; 0$</span>, let <span class="math-container">$\phi(\omega) = \theta(\omega) + \pi$</span> and you get the non-negative magnitude response.</p> <p>The real magnitude response is <span class="math-container">$|r(\omega)|$</span> and the phase response is</p> <p><span class="math-container">$$ \phi(\omega)=\left\{ \begin{aligned} &amp;\theta(\omega) , &amp; r(\omega)\geq 0 \\ &amp;\theta(\omega) + \pi , &amp; r(\omega)&lt;0 \end{aligned} \right. $$</span></p>
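A quick numerical check of this (a sketch with illustrative variable names): evaluating H(ω) directly and comparing with the factored form shows the true magnitude is |r(ω)|, and that r(ω) really does go negative.

```python
import numpy as np

# H(w) evaluated directly versus the factored form: where r(w) < 0 the
# true magnitude is |r(w)| and the phase picks up an extra pi.
w = np.linspace(-np.pi, np.pi, 1001)
H = 1 + np.exp(-1j * w) + np.exp(-2j * w) + np.exp(-3j * w)
r = 2 * np.cos(1.5 * w) + 2 * np.cos(0.5 * w)

assert np.allclose(np.abs(H), np.abs(r))   # magnitude is |r|, not r
assert (r < 0).any()                       # r really does go negative
```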
58
Fourier transform
Where did I make the mistake in the Fourier transform?
https://dsp.stackexchange.com/questions/79568/where-did-i-make-the-mistake-in-the-fourier-transform
<p><span class="math-container">$$ \begin{align} X(f) &amp; = \int_{-\infty}^{\infty} x(t)e^{-j2\pi ft}dt &amp; \\ &amp; = \int_{-\infty}^{\infty} x(t)\left(e^{-j2\pi} \right)^{ft}dt &amp; \;\;\mathrm{where}\; e^{-j2\pi}=1 \\ &amp; = \int_{-\infty}^{\infty} x(t) (1)^{ft}\; dt, &amp;\;\;\mathrm{but}\ \; 1^{ft} \; \mathrm{is\;always } \; 1 \\ &amp; = \int_{-\infty}^{\infty} x(t) dt, &amp;\;\;\mathrm{i.e.\; the\; area\; under} \; x(t) \; \mathrm{from\; -\infty\; to\; +\infty} \end{align} $$</span></p> <p>But this seems very weird!!!</p> <p>Where is my mistake?</p>
<p><a href="https://en.wikipedia.org/wiki/Exponentiation#Complex_exponents_with_a_positive_real_base" rel="nofollow noreferrer">See this Wikipedia article.</a></p> <p>You can only multiply exponents if they are real. Let's look at the example of <span class="math-container">$f\cdot t = 1/4$</span></p> <p><span class="math-container">$$ e^{-j2\pi\frac{1}{4}} = e^{-j\frac{\pi}{2}} = -j \neq 1$$</span></p>
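The fallacy is easy to see numerically (a small sketch): for complex bases, (e^a)^b and e^{ab} need not agree.

```python
import numpy as np

# The step (e^{-j*2*pi})^{f t} = 1^{f t} is where the argument breaks.
# With f*t = 1/4:
ft = 0.25
correct = np.exp(-2j * np.pi * ft)     # e^{-j*pi/2} = -j
naive = (np.exp(-2j * np.pi)) ** ft    # (1)^{0.25} = 1

assert np.isclose(correct, -1j)
assert np.isclose(naive, 1.0)
assert not np.isclose(correct, naive)
```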
59
Fourier transform
Fourier Transform of the signum function, using the integral property
https://dsp.stackexchange.com/questions/80888/fourier-transform-of-the-signum-function-using-the-integral-property
<p>Cheers, I am trying to find the Fourier transform of the signum function, which is</p> <p><span class="math-container">$$ \operatorname{sgn}(t) \triangleq \begin{cases} 1 \qquad &amp; t&gt;0 \\ 0 \qquad &amp; t=0 \\ -1 \qquad &amp; t&lt;0 \\ \end{cases} $$</span></p> <p>I rewrite this as:</p> <p><span class="math-container">$$\operatorname{sgn}(t) = 2u(t) -1$$</span></p> <p>and find its first derivative, which is:</p> <p><span class="math-container">$$\operatorname{sgn}'(t) = 2 \delta(t)$$</span></p> <p>and using the integration rule I know that:</p> <p><span class="math-container">$$\begin{align} \mathscr{F}\{\operatorname{sgn}(t)\} &amp;= \mathscr{F}\left\{\int_{-\infty}^t2\delta(ρ)dρ \right\} \\ &amp;= \frac{2}{j\omega} + \pi X(0)\delta(\omega) \\ \end{align}$$</span></p> <p>I need to prove that <span class="math-container">$X(0)=0$</span>, as the Fourier transform of the signum function is <span class="math-container">$\frac{2}{j\omega}$</span>, but I think this transformation always yields 1. Is what I am thinking correct, and if yes how would I go about this? I know that there are alternatives, but I am just checking to see alternative ways to prove things I already know. Thanks =)</p> <p>Edit: I tried the following thing: I split the Fourier transform of <span class="math-container">$\operatorname{sgn}(\cdot)$</span> into the Fourier transform of the unit step and of the -1 constant. Then, I get that</p> <p><span class="math-container">$$\mathscr{F}\{2u(t)\} = \frac{2}{j\omega} + 2\pi \delta(\omega) $$</span></p> <p>and</p> <p><span class="math-container">$$\mathscr{F}\{1\}= 2\pi \delta(\omega)$$</span></p> <p>so by subtracting, I get the correct thing. Is that the way to do it?</p>
<p>By taking the derivative you loose all information about the DC value of the original signal. Any signal</p> <p><span class="math-container">$$x(t) = 2\cdot u(t)- a$$</span></p> <p>has the same derivative, regardless of what <span class="math-container">$a$</span> is. So you do have to calculate the DC value by hand, which is simply the mean of the signal.</p> <p><span class="math-container">$$X(0) = \lim_{\tau \to \infty} \int_{-\tau}^{+\tau} \operatorname{sgn}(t) \, \mathrm{d}t = 0 $$</span></p>
60
Fourier transform
Positive or negative sign on Fourier transform formula
https://dsp.stackexchange.com/questions/26221/positive-or-negative-sign-on-fourier-transform-formula
<p>I have seen both the formula of Fourier transform with positive and negative sign on exponential as $$ X(\omega)=\int_{-\infty}^{\infty} x(t)e^{-j\omega t}dt$$ and $$ X(\omega)=\int_{-\infty}^{\infty} x(t)e^{j\omega t}dt$$ I am confused which one is the correct formula. I also solved for Fourier transform by taking the following example $$x(t)=\begin{cases} 1, \hspace{5mm} \text{for} \hspace{2mm} |t|&lt;1 \\0, \hspace{5mm} \text{for} \hspace{2mm} |t|&gt;1 \end{cases}$$ and got the same result as $$ X(\omega)=\begin{cases} 2\frac{\text{sin}\omega}{\omega}, \hspace{5mm} \text{when} \hspace{2mm} \omega \neq 0 \\2, \hspace{13mm} \text{when} \hspace{2mm} \omega = 0\end{cases}$$ Can anyone explain whether both the formula for Fourier transform are correct or not?</p>
<p>The definition with the negative sign in the exponent is the conventional definition of the Fourier transform; however, this is an arbitrary choice. It could just as easily be defined with $e^{j\omega t}$ and the inverse transform with $e^{-j\omega t}$.</p>
61
Fourier transform
Approximation of a complex-valued function of real variable
https://dsp.stackexchange.com/questions/81900/approximation-of-a-complex-valued-function-of-real-variable
<p>I have a problem approximating a complex-valued function of a real argument. In other words, how does one find a function's analytic form in the complex domain if the sets of values of a function of the type z = x + yi and arguments n are given? Thus, if sequences of real arguments (n) and complex function values (z = x + yi) are given, such that n is a set of real numbers (for simplicity, say integers) and z is a set of complex numbers of the form (a + bi), is it possible to recover (approximate) the analytic form of the function z = f(n) with any type of approximation? Note that x = f1(n) and y = f2(n), by the definition of a complex-valued function of a real variable.</p>
<p>The analytic signal is given as:</p> <p><span class="math-container">$$x_a(t) = x(t) + j \hat x(t)$$</span></p> <p>Where</p> <p><span class="math-container">$x_a(t)$</span> is the complex analytic signal</p> <p><span class="math-container">$x(t)$</span> is a real signal</p> <p><span class="math-container">$\hat x(t)$</span> is the Hilbert transform of <span class="math-container">$x$</span>.</p> <p>Thus in the context of the OP's question, the real argument is the function <span class="math-container">$x(t)$</span> such that &quot;x=f1(n)&quot; is simply &quot;x=x&quot; and the imaginary component is the Hilbert transform of x such that &quot;y=f2(n)&quot; is the Hilbert Transform itself.</p> <p>The Hilbert Transform is the result of convolving the function <span class="math-container">$x(t)$</span> (typically real, but need not be) with <span class="math-container">$1/(\pi t)$</span> and is often done instead in the frequency domain due to the simplification of doing a product instead of convolution. Further, in the frequency domain, the Hilbert Transform is simply multiplying by <span class="math-container">$-j$</span> when the frequency is positive, or by <span class="math-container">$j$</span> when the frequency is negative.</p> <p><a href="https://i.sstatic.net/8eDoH.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/8eDoH.png" alt="Hilbert Transform" /></a></p>
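The frequency-domain rule described above (multiply positive frequencies by -j and negative frequencies by +j) can be sketched directly with an FFT; the function name and test signal below are illustrative, not from the answer:

```python
import numpy as np

# FFT-based Hilbert transform: multiply positive frequencies by -j and
# negative frequencies by +j, as described in the answer.
def hilbert_transform(x):
    N = len(x)
    X = np.fft.fft(x)
    h = np.zeros(N, dtype=complex)
    h[1:N // 2] = -1j        # positive frequencies
    h[N // 2 + 1:] = 1j      # negative frequencies
    return np.fft.ifft(X * h).real

t = np.linspace(0, 1, 1000, endpoint=False)
x = np.cos(2 * np.pi * 10 * t)
xa = x + 1j * hilbert_transform(x)   # analytic signal x_a(t)

# Hilbert transform of a cosine is a sine; the envelope |x_a| is flat.
assert np.allclose(xa.imag, np.sin(2 * np.pi * 10 * t), atol=1e-9)
assert np.allclose(np.abs(xa), 1.0, atol=1e-9)
```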
62
Fourier transform
is $y(t) = (x(t))^2$ non-linear and time-invariant system?
https://dsp.stackexchange.com/questions/82269/is-yt-xt2-non-linear-and-time-invariant-system
<p>i was able to show that it is not linear but for time-invariant I am not sure. <span class="math-container">$y(t) = (x(t))^2$</span></p> <p>Let <span class="math-container">$y(t)$</span> be the output corresponding to the input <span class="math-container">$x(t).$</span> Let <span class="math-container">$x_T(t) = (x(t-T))^2.$</span> Then the output <span class="math-container">$y_T(t)$</span> corresponding to the input <span class="math-container">$x_T(t)$</span> is</p> <p><span class="math-container">$y_T(t) = (x_T(t-T))^2 = (x(t-T))^2 = y(t-T)$</span> -&gt; time-invariant</p> <p>Thanks for the help!</p>
<p>The system defined through the transfer function expression: <span class="math-container">$$ y(t)=(x(t))^2 $$</span> is <em>Time Invariant</em>, since time is not explicitly included in the expression, other than through <span class="math-container">$x(t)$</span> and <span class="math-container">$y(t)$</span>.</p> <p>Also, as you can see, the expression <span class="math-container">$H(x,t)=x^2$</span> is non-linear in <span class="math-container">$x$</span>, so the system is thus <em>Non-Linear</em>.</p> <p><strong>How to Check if a System is Time Invariant?</strong></p> <p>In general, for a system defined as: <span class="math-container">$$ y(t)=H(x(t),t) $$</span></p> <p>You can assess Time Invariance by comparing the output for the delayed input, <span class="math-container">$H(x(t-\tau),t)$</span>, with the delayed output <span class="math-container">$y(t-\tau)$</span>: <span class="math-container">$$ H \text{ Time Invariant}: \\ y(t-\tau)=H(x(t-\tau),t-\tau)=H(x(t-\tau),t) $$</span></p> <p>This is evident. The condition requires that the <span class="math-container">$t$</span> variable appear in <span class="math-container">$H(x,t)$</span> only through <span class="math-container">$x$</span>, never explicitly.</p> <p>In our case, this is again evident, since <span class="math-container">$H(x,t)=x^2$</span> and there is no explicit time variable <span class="math-container">$t$</span>.</p> <p>Compare with this Time Variant system: <span class="math-container">$y(t)=x(t)^t$</span>. Now you can see the difference.</p> <p><strong>How to Check if a System is Linear?</strong></p> <p>Finally, you can assess a Linear System by checking that the transfer function is linear in <span class="math-container">$x$</span>: <span class="math-container">$$ H \text{ Linear}: \\ H(ax_1(t),t)=aH(x_1(t),t) \ \forall a\\ H(x_1(t)+x_2(t),t)=H(x_1(t),t)+H(x_2(t),t)\\ $$</span></p> <p>Note that linearity is not applied to the <span class="math-container">$t$</span> variable, only to the mapping from inputs to outputs.</p> <p>As you can see, the system is Non-Linear, since <span class="math-container">$(ax(t))^2 \ne ax(t)^2$</span> and <span class="math-container">$(x_1(t)+x_2(t))^2 \ne x_1(t)^2+x_2(t)^2$</span>, so both conditions fail.</p> <p><strong>What happens when I don't have the expression H in real life?</strong></p> <p>All these conditions are easy to test for analytical expressions, but in real life you have to test them case by case. <em>Almost always</em> they will fail, so the Time Variant, Non-Linear system is, to some degree, the case you will most often have to deal with.</p> <p><strong>Simple Examples of Time Invariance and Linearity</strong></p> <p>As above, here are a few very simple examples of systems, defined through different expressions for <span class="math-container">$H(x,t)$</span>. You can now see that your system is Non-Linear Time Invariant, since the transfer function expression is non-linear in <span class="math-container">$x$</span>:</p> <ul> <li><span class="math-container">$H(x,t)=ax$</span>: Linear Time Invariant System</li> <li><span class="math-container">$H(x,t)=ax^2$</span>: Non-Linear Time Invariant System</li> <li><span class="math-container">$H(x,t)=tx$</span>: Linear Time Variant System</li> <li><span class="math-container">$H(x,t)=tx^2$</span>: Non-Linear Time Variant System</li> </ul>
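The two checks described above can also be probed numerically (a sketch; a discrete grid and random test signals stand in for continuous time):

```python
import numpy as np

# Numerical probe of the two properties for y(t) = x(t)^2.
def H(x):
    return x ** 2

rng = np.random.default_rng(0)
x1, x2 = rng.standard_normal(100), rng.standard_normal(100)

# Time invariance: delaying the input delays the output identically.
shift = 7
assert np.allclose(H(np.roll(x1, shift)), np.roll(H(x1), shift))

# Linearity fails: both homogeneity and additivity break.
assert not np.allclose(H(3 * x1), 3 * H(x1))
assert not np.allclose(H(x1 + x2), H(x1) + H(x2))
```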
63
Fourier transform
Spectrally flat binary sequence
https://dsp.stackexchange.com/questions/72894/spectrally-flat-binary-sequence
<p>I'm trying to construct a binary sequence of length <span class="math-container">$2^n$</span>. This sequence will be converted to a square signal of <span class="math-container">$\pm 1$</span>, where 0 produces <span class="math-container">$-1$</span> and 1 produces <span class="math-container">$1$</span>. I want the resultant signal to be as spectrally flat as possible, minimizing the <span class="math-container">$L^2$</span> norm of the continuous Fourier transform. Is there a reason to believe that this problem is hard, such as by equivalence to a known mathematical problem like subset sum, or is there a solution I'm overlooking?</p> <p>Things I've checked are <a href="https://web.media.mit.edu/%7Eraskar/deblur/" rel="nofollow noreferrer">Raskar's sequence</a>, which uses exhaustive search, URA/MURA, and <a href="https://doi.org/10.1145%2F1276377.1276464" rel="nofollow noreferrer">Levin 2007</a>. The best method I've considered is genetic optimization, but I'd prefer an optimal solution over a searched one.</p> <p>The motivation is that I have a reverb with Dirac delta spikes in pre-defined locations, and must choose a sign <span class="math-container">$\pm 1$</span> for each spike. Setting all signs to <span class="math-container">$-1$</span> produces a pitch determined by the locations, and setting signs randomly eliminates the pitch. This doesn't quite match the problem statement given above, but I think a solution for the above will lead to a solution for this.</p>
<p>Golay Complementary Sequences are spectrally flat. See <a href="https://www.isg.rhul.ac.uk/%7Ekp/golaysurvey.pdf" rel="nofollow noreferrer">https://www.isg.rhul.ac.uk/~kp/golaysurvey.pdf</a> or <a href="https://www.sfu.ca/%7Ejed/Papers/Davis%20Jedwab.%20Golay%20Reed-Muller.%201999.pdf" rel="nofollow noreferrer">https://www.sfu.ca/~jed/Papers/Davis%20Jedwab.%20Golay%20Reed-Muller.%201999.pdf</a></p> <p>The DTFT magnitude spectrums of Golay complementary sequences are flat, the maximum to mean value squared magnitude ratios are upper bounded by <span class="math-container">$2$</span> <span class="math-container">$\left( 3 dB \right)$</span>, for any length <span class="math-container">$2^{n}$</span>. The construction is easy.</p> <p>Starting from two sequences <span class="math-container">$A_{1} = \left[ 1 ~ 1 \right]$</span>, <span class="math-container">$B_{1} = \left[ 1 ~ -1 \right]$</span>. Let <span class="math-container">$A_{2} = \left[ A_{1}, B_{1} \right] = \left[ 1 ~ 1 ~ 1 ~ -1 \right]$</span>, <span class="math-container">$B_{2} = \left[ A_{1}, -B_{1} \right] = \left[ 1 ~ 1 ~ -1 ~ 1 \right]$</span>. Continue to get longer length <span class="math-container">$2^{n}$</span> sequences:</p> <p><span class="math-container">$A_{n} = \left[ A_{n-1}, B_{n-1} \right]$</span>, <span class="math-container">$B_{n} = \left[ A_{n-1}, -B_{n-1} \right]$</span>. Use either <span class="math-container">$A_{n}$</span> or <span class="math-container">$B_{n}$</span>.</p> <p>There are ways to construct <span class="math-container">$0.5 \left( n! \right) 2^{\left( n + 1 \right)}$</span> different length <span class="math-container">$2^{n}$</span>, {1, -1}-valued Golay complementary sequences (see Davis and Jedwab's paper), but the spectrally flat property is valid for all Golay complementary sequences, so they are all equally good.</p>
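The recursion above is a few lines of code, and the flatness claim can be checked on densely sampled DTFT points (a sketch; the function name and lengths are illustrative):

```python
import numpy as np

# Build length-2^n Golay complementary sequences by the recursion
# A_n = [A_{n-1}, B_{n-1}], B_n = [A_{n-1}, -B_{n-1}].
def golay(n):
    A, B = np.array([1.0, 1.0]), np.array([1.0, -1.0])
    for _ in range(n - 1):
        A, B = np.concatenate([A, B]), np.concatenate([A, -B])
    return A, B

A, _ = golay(8)                                  # length 2^8 = 256
mag2 = np.abs(np.fft.fft(A, 16 * len(A))) ** 2   # dense DTFT samples

# Mean of |A(w)|^2 is N (Parseval); the max is at most 2N (3 dB bound).
assert np.isclose(mag2.mean(), len(A))
assert mag2.max() <= 2 * len(A) + 1e-9
```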
64
Fourier transform
Can you quickly find the inverse Fourier Transform using the duality property?
https://dsp.stackexchange.com/questions/79526/can-you-quickly-find-the-inverse-fourier-transform-using-the-duality-property
<p>Cheers, in an exercise of mine I reach the point where I have to find <span class="math-container">$F^{-1}\{Λ(ω)\}$</span> (where <span class="math-container">$Λ(ω)$</span> is the triangle function, equal to <span class="math-container">$1-|ω|$</span> for <span class="math-container">$|ω| \leq 1 $</span> and 0 elsewhere). Using the duality property, I know that I will have to end up with a <span class="math-container">$\operatorname{sinc}$</span> function, and I also know that for <span class="math-container">$x(t) = A\operatorname{tri}(\frac{t}{T})$</span> we get the transform <span class="math-container">$X(ω) = F\{x(t)\} = \frac{\sin^2(πf)}{(πf)^2}$</span>. Is there a quick way to find the result of <span class="math-container">$\frac{1}{2π}\frac{\sin^2(\frac{t}{2})}{(t/2)^2}$</span> without having to use the definition with the integral? Thanks</p>
<p>You know a Fourier transform pair</p> <p><span class="math-container">$$x(t)\Longleftrightarrow X(\omega)\tag{1}$$</span></p> <p>with</p> <p><span class="math-container">$$\mathcal{F}\big\{x(t)\big\}=X(\omega)=\int_{-\infty}^{\infty}x(t)e^{-j\omega t}dt\tag{2}$$</span></p> <p>and</p> <p><span class="math-container">$$\mathcal{F}^{-1}\big\{X(\omega)\big\}=x(t)=\frac{1}{2\pi}\int_{-\infty}^{\infty}X(\omega)e^{j\omega t}d\omega\tag{3}$$</span></p> <p>Now you want to find the inverse Fourier transform of <span class="math-container">$x(\omega)$</span>:</p> <p><span class="math-container">$$\mathcal{F}^{-1}\big\{x(\omega)\big\}=\frac{1}{2\pi}\int_{-\infty}^{\infty}x(\omega)e^{j\omega t}d\omega\tag{4}$$</span></p> <p>Comparing <span class="math-container">$(4)$</span> with <span class="math-container">$(2)$</span> you can see that</p> <p><span class="math-container">$$\mathcal{F}^{-1}\big\{x(\omega)\big\}=\frac{1}{2\pi}X(-t)\tag{5}$$</span></p> <p>So from</p> <p><span class="math-container">$$\mathcal{F}\big\{x(t)\big\}=X(\omega)\tag{6}$$</span></p> <p>it follows that</p> <p><span class="math-container">$$\mathcal{F}^{-1}\big\{x(\omega)\big\}=\frac{1}{2\pi}X(-t)\tag{7}$$</span></p> <p>If you use the unitary definition of the Fourier transform with frequency variable <span class="math-container">$f=\omega/2\pi$</span> then you can get rid of the factor <span class="math-container">$1/2\pi$</span> in the duality formula <span class="math-container">$(7)$</span>.</p>
65
Fourier transform
Recover Fourier Transform of flipped signal from the FFT of orignal signal
https://dsp.stackexchange.com/questions/82725/recover-fourier-transform-of-flipped-signal-from-the-fft-of-orignal-signal
<p>I am trying to recover the Fourier transform of a flipped signal directly from the Fourier transform of the original signal.</p> <p>More precisely, let <code>s</code> be a random signal:</p> <pre><code>s = np.random.randn(n) </code></pre> <p>Let <code>s1_fft</code> and <code>s2_fft</code> be the Fourier transforms of the signal <code>s</code> and of its flipped version:</p> <pre><code>s1_fft = np.fft.fft(s) s2_fft = np.fft.fft(s[::-1]) </code></pre> <p>I am trying to find the operation to get <code>s2_fft</code> from <code>s1_fft</code> without having to go back to the time domain.</p> <p>I am actually trying to do this with the 2D Fourier transform.</p> <p>Thank you,</p>
<p>We can start with the simple DFT relationship of the time reversal, i.e.</p> <p>If <span class="math-container">$ \mathcal{F} (x[n]) = X[k] $</span>, then <span class="math-container">$ \mathcal{F} (x[-n]) = X'[k] $</span>, where <span class="math-container">$'$</span> denotes complex conjugation.</p> <p>Now flipping the vector as in your code is NOT just a simple time flip. For a time flip the sample at <span class="math-container">$n=0$</span> should stay put. So vector flipping as in your code performs a time flip AND a circular shift by one sample. That shift corresponds to multiplication with <span class="math-container">$e^{j2\pi k/N}$</span>, where <span class="math-container">$N$</span> is the FFT length.</p> <p>So in order to restore the FFT from the vector flip, you need to undo the shift and then take the conjugate. Here is how this would look in Matlab</p> <pre><code>%% FFT of a vector flip % random vector nx = 1024; x = randn(nx,1); fx = fft(x); % flip and FFT y = x(end:-1:1); fy = fft(y); % multiply with inverse shift operator and conjugate w = exp(-1i*2*pi*(0:nx-1)'./nx); % inverse circular shift fz = conj(fy.*w); % calculate error fprintf('Error = %6.2fdB\n', 10*log10(mean(abs((fz-fx)).^2)./mean(abs(fx.^2)))); </code></pre>
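Since the question uses NumPy, here is a hedged NumPy sketch of the same relationship: for a real vector, the FFT of the flip equals the conjugated FFT times the one-sample linear-phase term.

```python
import numpy as np

# For real s, the vector flip satisfies
#   fft(s[::-1])[k] = conj(fft(s)[k]) * exp(+1j*2*pi*k/N)
# i.e. time reversal (conjugation for real input) plus the one-sample
# circular-shift phase term.
rng = np.random.default_rng(1)
N = 1024
s = rng.standard_normal(N)
k = np.arange(N)

s1_fft = np.fft.fft(s)
s2_fft = np.fft.fft(s[::-1])

predicted = np.conj(s1_fft) * np.exp(1j * 2 * np.pi * k / N)
assert np.allclose(s2_fft, predicted)
```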
66
Fourier transform
Interpretation of complex time-domain signal resulting from time-shift property of Fourier transform
https://dsp.stackexchange.com/questions/83217/interpretation-of-complex-time-domain-signal-resulting-from-time-shift-property
<p>I am currently working on simulating RF transmissions for beamforming and other applications in Matlab.</p> <p>One of the fundamental properties that I need to simulate is signal propagation delay due to transmission distance. This can either be done by generating the signal <span class="math-container">$s(t-\tau)$</span> with offset <span class="math-container">$\tau = d / c$</span> where <span class="math-container">$c$</span> is the speed of light and <span class="math-container">$d$</span> the transmission distance, or by utilising the Fourier transform property <span class="math-container">$s(t-\tau) = \mathcal{F}^{-1}(\mathcal{F}(s(t)) \exp(-j2\pi f\tau))$</span> after the fact.</p> <p>However, the Fourier transform method produces complex time domain signals in practice. I wanted to confirm whether the imaginary component produced in this instance is a result of insufficient computational precision, or if the imaginary component has some interpretation as an IQ signal (and if so, how to interpret this IQ data given that there's no carrier involved in this process).</p> <p>Below is a minimum working example in Matlab to demonstrate.</p> <pre><code>N = 100; % number of data points t = linspace(0, 2*pi, N+1); t(end) = []; % time vector dt = t(2) - t(1); % time delta s = cos(5*t) + cos(3*t) + cos(t); % some baseband signal fs = 1 / dt; % sample rate f = linspace(-fs/2, fs/2, N+1); f(end) = []; % frequency vector tau = 0.5*dt; % chosen delay (fractional sample) s_delayed = ifft(ifftshift(fftshift(fft(s)) .* exp(-1j*2*pi*f*tau))); % delay in fourier domain % plot original and delayed signal figure, plot(t, s) hold on, plot(t, real(s_delayed)); plot(t, imag(s_delayed)); legend('original', 'real of time delayed', 'imag of time delayed') </code></pre>
<p>It is nothing to do with numerical precision in your case; the main reason is the fractional delay. We know that a phase shift of the DFT corresponds to a <strong>circular shift</strong> in the time domain. The DFT of <span class="math-container">$x[n]$</span> is <span class="math-container">$$ X[k] = \sum_{n=0}^{N-1} x[n] e^{-j2\pi kn/N} $$</span></p> <p>and its time delayed signal <span class="math-container">$x[n-D]$</span> has a DFT <span class="math-container">$$ \begin{aligned} \text{DFT}\{x[n-D]\} &amp;= \sum_{n=0}^{N-1} x[n-D] e^{-j2\pi kn/N} \\ &amp;=\sum_{m=0}^{N-1} x[m] e^{-j2\pi km/N} e^{-j2\pi kD/N} \\ &amp;= X[k] e^{-j2\pi kD/N} \end{aligned} $$</span></p> <p>For any real-valued sequence <span class="math-container">$x[n]$</span> we have the following facts:</p> <ul> <li><span class="math-container">$X[0]$</span> is real, and equals <span class="math-container">$\sum_n x[n]$</span></li> <li><span class="math-container">$X[N/2]$</span> is real if <span class="math-container">$N$</span> is even, and equals <span class="math-container">$\sum_n (-1)^n x[n]$</span></li> </ul> <p>Apparently <span class="math-container">$x[n-D]$</span> is a real sequence and should follow the above properties. Let's check it out:</p> <ul> <li><span class="math-container">$X[0] e^{-j2\pi 0 D/N} = X[0]$</span> is real</li> <li><span class="math-container">$X[N/2] e^{-j2\pi (N/2) D/N} = X[N/2] e^{-j\pi D}$</span> is a real number only if <span class="math-container">$D$</span> is an integer. So if you want a fractional delay <span class="math-container">$D$</span>, you won't get a real-valued IDFT result.</li> </ul> <p>Here's modified Matlab code. Check the values of <code>S(1)</code>, <code>S(51)</code>, <code>phaseshift(1)</code>, <code>phaseshift(51)</code>, <code>S_delayed(1)</code>, <code>S_delayed(51)</code> when you change the delay <code>D</code>. 
You may notice that <code>s_delayed</code> has very small imaginary parts even if an integer delay is chosen, that is because of the computational precision. In this case you can use <code>ifft(Y, 'symmetric')</code> to force the output to be real.</p> <pre><code>N = 100; % number of data points t = linspace(0, 2*pi, N+1).'; t(end) = []; % time vector dt = t(2) - t(1); % time delta s = cos(5*t) + cos(3*t) + cos(t); % some baseband signal k = (0:N-1).'; D = 5; % chosen delay phaseshift = exp(-1j*2*pi*k*D/N); S = (fft(s)); S_delayed = S .* phaseshift; s_delayed = ifft((S_delayed)); % delay in fourier domain % plot original and delayed signal figure, plot(t, s) hold on, plot(t, real(s_delayed)); plot(t, imag(s_delayed)); legend('original', 'real of time delayed', 'imag of time delayed') </code></pre>
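A NumPy version of the same experiment (signal and delays chosen arbitrarily) makes the dichotomy easy to test: an integer delay leaves only round-off noise in the imaginary part, while a fractional delay yields a genuinely complex result.

```python
import numpy as np

# Integer vs. fractional frequency-domain delay of a real signal.
N = 100
t = np.linspace(0, 2 * np.pi, N, endpoint=False)
s = np.cos(5 * t) + np.cos(3 * t) + np.cos(t)
k = np.arange(N)

def delay(sig, D):
    # apply the linear-phase factor exp(-j*2*pi*k*D/N) in the DFT domain
    return np.fft.ifft(np.fft.fft(sig) * np.exp(-1j * 2 * np.pi * k * D / N))

assert np.max(np.abs(delay(s, 5).imag)) < 1e-10     # integer delay: real output
assert np.max(np.abs(delay(s, 0.5).imag)) > 1e-3    # fractional delay: complex
```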
67
Fourier transform
What is the meaning of $Ta_k$ of fourier series or transform?
https://dsp.stackexchange.com/questions/9050/what-is-the-meaning-of-ta-k-of-fourier-series-or-transform
<p>What is the meaning of $Ta_k$ in the Fourier series or transform? I am taking a course on signals and systems.</p> <p>On page 286 of my textbook, it says that as T becomes arbitrarily large the original periodic square wave approaches a rectangular pulse. Also it says that all that remains in the time domain is an aperiodic signal corresponding to one period of the square wave. (textbook: Signals and Systems, second edition, author: Oppenheim)</p> <p>I have difficulty understanding this... I can't connect this idea with the Fourier transform.</p> <p>I suggest a link <a href="http://ocw.mit.edu/courses/electrical-engineering-and-computer-science/6-003-signals-and-systems-fall-2011/lecture-videos-and-slides/MIT6_003F11_lec16.pdf" rel="nofollow">http://ocw.mit.edu/courses/electrical-engineering-and-computer-science/6-003-signals-and-systems-fall-2011/lecture-videos-and-slides/MIT6_003F11_lec16.pdf</a></p>
<p>The idea is that a <a href="http://en.wikipedia.org/wiki/Fourier_series" rel="nofollow">Fourier series</a> is only defined for periodic signals. In the discussion in the linked slides, the author is considering a rectangular pulse train with period $T$. That is, a pulse of width $2S$ repeats periodically with a spacing of $T$ between them. The pulses are therefore centered at:</p> <p>$$ [\ldots, -2T, -T, 0, T, 2T, \ldots ] $$ </p> <p>Now, consider what happens as $T \to \infty$: in the limit, the only pulse that remains is the one centered at zero; the others are infinitely far away. When the author makes the claim that:</p> <p>$$ \lim_{T\to \infty} T a_k = E(\omega) $$</p> <p>He or she is trying to show that the <a href="http://en.wikipedia.org/wiki/Fourier_transform" rel="nofollow">Fourier transform</a>, which is defined for suitably well-formed aperiodic signals, can be thought of as the Fourier series of that signal (which typically wouldn't be defined since the signal is not periodic) in the limiting case of an infinite period. Stated a little differently, you can in some way think of an aperiodic signal as a periodic signal with infinite period.</p> <p>The multiplication by $T$ in the limit is to account for the differences in definition between the Fourier series and Fourier transform: the series representation typically has a factor of $\frac{1}{T}$, while the transform does not. I don't know that there is a lot of insight to be gained via this analysis, but it shows that the series and transform representations are intimately related.</p>
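A small numerical check (assuming a pulse of width 2S with S = 1, centered at the origin) illustrates the claim: the scaled Fourier-series coefficients T·a_k of the pulse train, sampled at f = k/T, lie on the Fourier transform E(f) = 2S·sinc(2Sf) of a single pulse for any period T > 2S.

```python
import numpy as np

# Compare T*a_k of a rectangular pulse train with the continuous FT of
# one pulse, E(f) = 2*S*sinc(2*S*f), at the frequencies f = k/T.
S = 1.0
M = 200000                                   # midpoint-rule grid size
for T in (8.0, 32.0):
    dt = T / M
    t = -T / 2 + (np.arange(M) + 0.5) * dt   # midpoints over one period
    x = (np.abs(t) <= S).astype(float)       # one period of the pulse train
    for k in range(6):
        a_k = np.sum(x * np.exp(-2j * np.pi * k * t / T)) * dt / T
        E_f = 2 * S * np.sinc(2 * S * k / T)
        assert abs(T * a_k - E_f) < 1e-3
```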
68
Fourier transform
Calculate maximum of filter kernel
https://dsp.stackexchange.com/questions/9124/calculate-maximum-of-filter-kernel
<p>I'm sure there must be an easy way to do this, but given the Fourier transform of an isotropic filter kernel, $\hat{f}(\mathbf{u}) = \mathcal{F}f(\mathbf{z})$, can one calculate the value of the kernel at $\mathbf{z} = 0$?</p>
<p>Since $$f(\mathbf{z})=\int_{\mathbf{R}^n}\hat{f}(\mathbf{u})e^{2\pi i\mathbf{z}\cdot\mathbf{u}}\;d\mathbf{u}$$</p> <p>$$f(\mathbf{0})=\int_{\mathbf{R}^n}\hat{f}(\mathbf{u})\;d\mathbf{u}$$</p> <p>So you simply integrate (or sum in the discrete case) over $\hat{f}(\mathbf{u})$. </p>
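A quick numerical sanity check, using a kernel whose transform is known in closed form: under this convention f(z) = exp(−πz²) transforms to itself, so the integral of the transform over the real line must equal f(0) = 1.

```python
import numpy as np

# f(0) recovered as the integral of fhat(u) = exp(-pi u^2) over R.
u = np.linspace(-10, 10, 200001)
du = u[1] - u[0]
fhat = np.exp(-np.pi * u**2)

f_at_zero = np.sum(fhat) * du            # Riemann sum for the integral
assert abs(f_at_zero - 1.0) < 1e-6
```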
69
Fourier transform
Frequency Axis problem in a DTFT
https://dsp.stackexchange.com/questions/13930/frequency-axis-problem-in-a-dtft
<p>I have a doubt related to calculating the Discrete Time Fourier Transform (DTFT) by hand, specifically in how to calculate the frequency axis of the spectrum. My signal has N values and was sampled at FS Hz, so the spectrum has N entries too (where N/2 values are a mirror of the other half). The maximum representable frequency is FS/2 (by the Nyquist theorem), which means I have to multiply each entry by FS/N, so for the last entry (N/2) I have this:</p> <pre><code>(FS/N) * (N/2) = FS/2 </code></pre> <p>But when I sample at a higher FS Hz the spectrum is shifted. Think for instance of this function:</p> <pre><code>x = cos(2*pi*f0*t) </code></pre> <p>Where "f0 = 1/T", T is the period of the cosine and "t" means each entry of the time axis. Then the spectrum is two pulses at "-f0" and "f0". But doing this in python:</p> <pre><code>f = range(-N/2,N/2) f = [float(FS)/(float(N)) * i for i in f] </code></pre> <p>And sampling at a higher FS, the pulses are shifted. But the correct behavior is that the pulses remain in the same location ("-f0" and "f0"), because the cosine's period didn't change. Am I doing something wrong?</p> <p>Thanks in advance ;)</p> <p>PS: I know that increasing the sampling rate would increase the density of the spectrum and of the time signal too, so N grows automatically, because I would have more samples per second.</p>
<p>Generally I like to compose my sinusoids using this format:</p> <pre><code>x = cos(2*pi*f/fs*(0:num_samps-1)) </code></pre> <p>Depending on the FFT routine you will use it will provide either the onesided or twosided result. Calculating the domain of the frequency axis is as follows. It should be noted that num_samps and NFFT may not be the same.</p> <pre><code>% Nfft - FFT size % Fs - Sampling frequency in Hz % oneside - flag indicating FFT results will be onesided or two sided % Compute the frequency vector if oneside == true f = fs * (0:Nfft/2).'./Nfft; else f = fs * (-Nfft/2:Nfft/2-1).'./Nfft; end </code></pre> <p>Have a look at this <a href="http://www.rssd.esa.int/SP/LISAPATHFINDER/docs/Data_Analysis/GH_FFT.pdf" rel="nofollow">paper</a> for a nice explanation on spectrum estimation.</p>
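As a sanity check on the frequency axis (a NumPy sketch, independent of the linked paper and code): with the axis built as fs·k/N, the spectral peak of cos(2πf₀t) lands at f₀ regardless of the sampling rate.

```python
import numpy as np

# The peak bin frequency is invariant to the sampling rate when the
# frequency axis is scaled correctly.
f0 = 5.0
for fs in (64.0, 256.0):
    N = int(fs)                        # one second of samples
    t = np.arange(N) / fs
    x = np.cos(2 * np.pi * f0 * t)
    f = np.fft.fftfreq(N, d=1 / fs)    # fs*k/N, wrapped to the +/- fs/2 range
    peak = f[np.argmax(np.abs(np.fft.fft(x)))]
    assert abs(abs(peak) - f0) < 1e-9
```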
70
Fourier transform
Discrete Fourier Transform by hand
https://dsp.stackexchange.com/questions/18461/discrete-fourier-transform-by-hand
<p>I have an assignment where I'm given the DFT of a sequence $x[n]$ as $X[k]=\{4,3,2,1,0,1,2,3\}$ and also $$y[n] = \begin{cases} x[n/2] &amp; \text{if } n \text{ is even} \\ 0 &amp; \text{otherwise} \end{cases} $$</p> <p>and I'm supposed to find and sketch the DFT of $y[n]$.</p> <p>So $y[n] = \{x[0], 0, x[1], 0 ... x[7], 0\}$ and it's not complicated to find $Y[k]$ if we know $x[n]$</p> <p>I know how to use the definition of the DFT and IDFT to calculate $x[n]$ but it's a tedious task to do by hand, especially when the sequence is longer than a few items. Is there a quicker way to calculate the DFT and IDFT by hand without using a program like Matlab?</p>
<p>If you use the DFT formula, you get:<br> $$ Y[k] = \sum_{n=0}^{2N-1}y[n]e^{-j\frac{2\pi k n}{2N}} $$ Now, substituting the definition of $y[n]$ you get:<br> $$ Y[k] = \sum_{n=0}^{N-1}x[n]e^{-j\frac{2\pi k (2n)}{2N}} = \sum_{n=0}^{N-1}x[n]e^{-j\frac{2\pi k n}{N}} $$<br> So, for $0\leq k &lt; N$ you get that $$ Y[k] = X[k]$$ and for $k\geq N$ you get $$ Y[k] = \sum_{n=0}^{N-1}x[n]e^{-j\frac{2\pi k n}{N}} = \sum_{n=0}^{N-1}x[n]e^{-j\frac{2\pi (k-N) n}{N}} = Y[k-N] $$<br> Therefore, $$ Y[k] = \begin{cases}X[k] &amp; 0\leq k &lt; N \\ X[k-N] &amp; k\geq N\end{cases} $$ or $Y[k]=\{4,3,2,1,0,1,2,3,4,3,2,1,0,1,2,3\}$</p>
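The result is easy to confirm numerically with an arbitrary length-8 sequence: the DFT of the zero-interleaved signal is the original DFT repeated twice.

```python
import numpy as np

# Zero-interleaving a length-N sequence duplicates its DFT: Y = [X, X].
x = np.array([1.0, -2.0, 3.5, 0.0, 4.0, 1.0, -1.0, 2.0])
N = len(x)

y = np.zeros(2 * N)
y[::2] = x                             # y[2n] = x[n], odd samples are zero

X = np.fft.fft(x)
Y = np.fft.fft(y)

assert np.allclose(Y, np.concatenate([X, X]))
```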
71
Fourier transform
Inverse Fourier transform of complex exponential with frequency dependent shift
https://dsp.stackexchange.com/questions/74642/inverse-fourier-transform-of-complex-exponential-with-frequency-dependent-shift
<p>In the case of a constant delay <span class="math-container">$\tau$</span>, we have the following equality:</p> <p><span class="math-container">$$\begin{align}\mathcal{F^{-1}}\left\{e^{-j\omega \tau}\right\}=\delta(t-\tau)\end{align}$$</span></p> <p>If the delay is frequency dependent, <span class="math-container">$\tau(\omega)$</span>, can <span class="math-container">$\mathcal{F}^{-1}\left\{e^{-j\omega \tau(\omega)}\right\}$</span> be expressed as a sum of Diracs?</p>
<p>In the general case: no.</p> <p>For the (inverse) Fourier transform of a function to be composed of a countable sum of Diracs (i.e. to be discrete), the function needs to be periodic.</p> <p>Your <span class="math-container">$e^{-j\omega\tau(\omega)}$</span> is not periodic (it <em>can</em> be periodic, if <span class="math-container">$\omega\tau(\omega)$</span> happens to be periodic with period rationally related to <span class="math-container">$2\pi$</span>, but that's a pretty special case). Therefore, its inverse Fourier transform is continuous, and can't be represented by a sum of Diracs.</p>
72
Fourier transform
Vector parameters in uncountably infinite-dimensional spaces
https://dsp.stackexchange.com/questions/43154/vector-parameters-in-uncountably-infinite-dimensional-spaces
<p>My question was, in an uncountably infinite-dimensional vector space, how do we represent a vector by a list of parameters, as we do in finite-dimensional spaces? I was assuming that if we cannot express a vector as a list of discrete parameters, we have a big issue... but during the writing up of this question, it seemed that there is no big issue: the parameters just change to a function and the sum changes to an integration.</p> <p>But I am not sure if my reasoning below is correct, so I still post it below; please correct me if there is something wrong:</p> <p>In a finite-dimensional vector space $\Omega$, each vector (or point) $\mathbf{v}$ is represented as a list of numbers $c_i$, which can be seen as the parameters or coefficients of the vector, in the sense that the vector is the sum of the products of the parameters with the respective basis vectors $e_i$:</p> <p>$$\mathbf{v}=\sum_{i=1}^{n} c_i e_i$$</p> <p>When $e_i=x^i$, then $\Omega$ is a finite-dimensional polynomial function space.</p> <p>Now, if the dimension is not finite, it seems there are two possibilities:</p> <ol> <li>countably infinite dimensional</li> <li>uncountably infinite dimensional, e.g. $e_i=x^i, i\in \mathbb{R}$</li> </ol> <p>For the first case, the vector $\mathbf{v}$ in the polynomial function space can be expressed similarly:</p> <p>$$\mathbf{v}=\sum_{i=1}^{\infty} c_i e_i$$ and $\mathbf{v}$ can also be represented by its parameters, i.e., an infinite list of members $c_i, i\in\mathbb{N}$.</p> <p>But for the 2nd case, it seems it's not possible to represent $\mathbf{v}$ by a list of discrete parameters any more... the parameter set is itself a continuous function $c:\mathbb{R}\rightarrow\mathbb{R}$. In this case, the vector $\mathbf{v}$ should be expressed as an integration in terms of its parameters and the related basis:</p> <p>$$\mathbf{v}=\int_{-\infty}^{\infty} c(x) e(x)\mathrm{d}x$$</p> <p>My question arose from the Fourier transform. 
Now, with the understanding above, I have the following: loosely speaking, all functions (whose Fourier transform exists) form an uncountably-infinite-dimensional vector space, with an uncountably infinite set of basis functions $e^{j\omega t}, \omega\in\mathbb{R}$. Each function's parameters in this space form a function $X(\omega)$, which is defined by the Fourier transform formula:</p> <p>$$X(\omega)=\int_{-\infty}^{\infty} f(t) e^{-j\omega t}\mathrm{d}t$$ The inverse Fourier transform formula is just the way to express the vector $f(t)$ in terms of its parameters and the related basis:</p> <p>$$f(t)=\frac{1}{2\pi}\int_{-\infty}^{\infty} X(\omega) e^{j\omega t}\mathrm{d}\omega$$</p>
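This "vector from its parameters" reading can be illustrated numerically (test function chosen for its known transform, a sketch only): for f(t) = exp(−|t|) the parameter function is X(ω) = 2/(1+ω²), and a discretized inverse-transform integral rebuilds f(t).

```python
import numpy as np

# Riemann-sum approximation of f(t) = (1/2*pi) * integral X(w) e^{jwt} dw
# for f(t) = exp(-|t|), whose transform is X(w) = 2/(1+w^2).
w = np.linspace(-2000, 2000, 400001)
dw = w[1] - w[0]
X = 2.0 / (1.0 + w**2)

for t in (0.0, 0.5, 1.0, 2.0):
    f_t = np.sum(X * np.exp(1j * w * t)).real * dw / (2 * np.pi)
    assert abs(f_t - np.exp(-abs(t))) < 1e-3   # within the tail-truncation error
```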
73
Fourier transform
In Fourier transforms, can momentum space be analogized to frequency, and position space be analogized to wavelength?
https://dsp.stackexchange.com/questions/157/in-fourier-transforms-can-momentum-space-be-analogized-to-frequency-and-positi
<p>We know that in quantum mechanics, momentum space is the fourier transform of position space (and vice versa)</p> <p>And also, in time-series analysis, that frequency (of cycles) is the fourier transform of the distribution of all cycle lengths.</p> <p>What about electromagnetic radiation? Is the distribution of frequencies the Fourier transform of the distribution of wavelengths?</p> <p>Is it physically feasible to think of a distribution of positions (each position value with a certain count), and then to take a fourier transform of that, and end up with a distribution of momentum values? Even in JPG compression, you have frequency and position (each position value has a certain count that corresponds to the color value on a scale of $0$ to $255$)</p>
<p>The notions of position and momentum are not fundamental to the uncertainty principle, but the fact that position and momentum are analogous to instantaneous time and instantaneous frequency is. There is no necessity to translate the spatial domain of an image and its Fourier representation in terms of position and momentum. The notion of frequency in this case expresses how fast an image changes or where one finds sharp discontinuities/edge-like structures in an image. </p> <p>This question might be motivated by a misunderstanding of the origins of the Heisenberg Uncertainty principle. The basis of any uncertainty principle is time-frequency uncertainty, i.e. given any two functions where one of them can be expressed in terms of the Fourier transform of the other, both cannot be localized in their respective domains. See <a href="http://www-stat.stanford.edu/~donoho/Reports/Oldies/UPSR.pdf" rel="nofollow" title="Uncertainty Principles and Signal Recovery">1</a> for other types of discrete-time uncertainty principles that don't have the classical time-frequency interpretation. </p>
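A rough numerical illustration of this time-frequency uncertainty (Gaussian test signals with arbitrarily chosen widths): squeezing the signal in time widens its spectrum, while the product of the two RMS spreads stays essentially constant at 1/(4π), the value attained by Gaussians.

```python
import numpy as np

# RMS spreads of |x|^2 in time and |X|^2 in frequency for Gaussians.
def spreads(sigma, fs=1000.0, N=4000):
    t = (np.arange(N) - N // 2) / fs
    x = np.exp(-t**2 / (2 * sigma**2))
    f = np.fft.fftfreq(N, d=1 / fs)
    P_t = x**2 / np.sum(x**2)              # normalized time energy density
    P_f = np.abs(np.fft.fft(x))**2
    P_f /= np.sum(P_f)                     # normalized spectral energy density
    return np.sqrt(np.sum(P_t * t**2)), np.sqrt(np.sum(P_f * f**2))

t1, f1 = spreads(0.05)                     # narrow in time
t2, f2 = spreads(0.2)                      # wide in time

assert t2 > t1 and f2 < f1                 # wider in time -> narrower in frequency
assert abs(t1 * f1 - 1 / (4 * np.pi)) < 1e-2
assert abs(t2 * f2 - 1 / (4 * np.pi)) < 1e-2
```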
74
Fourier transform
What happens if we change the limits of integral in Fourier transform?
https://dsp.stackexchange.com/questions/6282/what-happens-if-we-change-the-limits-of-integral-in-fourier-transform
<p>By the definition of the Fourier transform</p> <p>$$X(\omega)=\int_{-\infty}^\infty x(t) e^{-j\omega t} dt $$</p> <p>Now what will happen to the result of the transform, for example in the case of $x(t)= \cos(\omega_0 t)$, if the limits are $0$ to $A$ instead of $-\infty$ to $\infty$? </p> <p>For $x(t)=\cos(\omega_0 t)$ its Fourier transform is given by $ X(\omega)= \pi[\delta(\omega-\omega_0) + \delta(\omega+\omega_0)]$</p> <p>so if the limits are changed, will it affect the answer?</p>
<p>Yes, it will affect the answer. What you're suggesting is known as the short-time Fourier transform. In the sinusoidal case that you proposed, you will observe spectral leakage, as the truncation of the integral limits is equivalent to multiplication of the sinusoid by a rectangular window function. This multiplication in the time domain maps to convolution in the frequency domain. The Fourier transform of a rectangular window is a sinc function, so the convolution will yield two sinc functions centered at the locations of the impulses in your original answer. </p>
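The leakage effect is easy to demonstrate numerically: with an integral number of cycles in the analysis interval a sinusoid occupies exactly one bin pair, while truncating to a non-integral number of cycles (the discrete analogue of cutting the integral short) smears energy across the whole spectrum.

```python
import numpy as np

# Spectral leakage from truncating a sinusoid mid-cycle.
N = 256
n = np.arange(N)

exact = np.abs(np.fft.fft(np.cos(2 * np.pi * 8 * n / N)))     # 8 full cycles
leaky = np.abs(np.fft.fft(np.cos(2 * np.pi * 8.5 * n / N)))   # cut mid-cycle

assert np.sum(exact > 1e-6) == 2       # only the +/-8 bins are nonzero
assert np.sum(leaky > 1e-6) > 100      # energy leaked across many bins
```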
75
Fourier transform
Unit Impulse function FT
https://dsp.stackexchange.com/questions/17675/unit-impulse-funciton-ft
<p>How is the FT of $\delta(t)$ equal to 1? The normal FT gives the result $\infty$. Can someone please explain? I did the normal integration and substituted the limits. </p> <p>Is it because $\delta(t)$ is a unit impulse function, so as its height is large its width is very small, so no matter what the FT will always be equal to 1? (I'm just trying to figure out the logic)</p>
<p>It follows directly from the definition of the Dirac delta distribution. It is defined so that</p> <p>$$\int_\mathbb{R} \delta(x) f(x) dx := f(0)$$</p> <p>for any test function $f(x)$. In other words, the Dirac distribution is the generator of the linear functional that extracts a single function value.</p> <p>With this definition the Fourier transform of the Dirac distribution is simply: $$\int_\mathbb{R} \delta(t) \exp(-2\pi i \omega t) dt=\exp(-2\pi i \omega\cdot 0)=\exp(0)=1$$</p> <p>It doesn't make much sense to say the Dirac distribution is infinitely high or infinitely narrow. Just use the definition given above and apply it. If you really need to, you can understand the distribution as the limit of a sequence of certain functions, but it's not a function itself. And specifically it doesn't have a graph.</p>
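The discrete counterpart of this statement is exact and easy to verify: the DFT of a unit sample at n = 0 is 1 in every frequency bin.

```python
import numpy as np

# The DFT of a unit impulse at n = 0 is identically 1, mirroring
# F{delta(t)} = 1 in the continuous case.
N = 64
delta = np.zeros(N)
delta[0] = 1.0

assert np.allclose(np.fft.fft(delta), np.ones(N))
```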
76
Fourier transform
Multi-Time Window FFT
https://dsp.stackexchange.com/questions/27165/multi-time-window-fft
<p>One can achieve better resolution results by taking FFTs of different sizes of the input signal. The FFT size decreases as frequency increases, i.e. a longer FFT length for lower frequencies and a shorter FFT length for higher frequencies. I have tried to find papers on this topic but have not found any so far. Rational Acoustics has a few brochures that mention MTW - Multi-Time Window FFT, but there is no mathematics behind them. Can anyone help me with the underlying mathematics or some code implementation (C++, C, or Java)? In other words, how can I apply a longer FFT, in my software, for lower frequencies and a shorter FFT for higher frequencies to get uniform resolution as a result?</p>
<p>If your goal is to plot a magnitude spectrum with the frequency axis on a log scale, but with a roughly even visual resolution along that axis, then a single FFT might provide too low a density of plot points (without interpolation) at low frequencies, and more plot points than can be plotted on a line (without averaging) at high frequencies for a given print or pixel display resolution or smoothness. If you use different FFT lengths for different octaves or sub-octaves in frequency, and select subsets of the results from each FFT, then you can maintain a lower delta in density of FFT result points when plotted on such a log scale. How many FFTs you might want to use depends on the maximum variance you want in log frequency resolution of the final joined plot.</p> <p>However, since the FFTs are of different lengths, then they are for different sets of data. A sequence of longer FFT windows can be done with larger offsets if more time resolution isn't needed at low frequencies. You will have the problem of how to join or blend all these different FFT results (usually subset segments of the results) so that the magnitude response might be uniform across FFT boundaries. In total, you also end up with an overdetermined set of FFT results (but so will highly overlapped windows).</p> <p>Even better results (in terms of even log frequency plot resolution) might be obtained by using some form of wavelet transform (a Morlet or Gabor wavelet or constant-Q transform, for example) instead of a bunch of semi-redundant FFTs. </p> <p>But libraries for optimized FFTs might be more available on some platforms. Each basis vector of a windowed FFT in a "multi-time" set can be considered a non-optimally-sized wavelet. So, in some ways, using these so-called "multi-time window" FFTs is a "poor man's" wavelet transform. 
But some of the many FFT results might be useful for other forms of more traditional DFT analysis of the signal input (in conjunction with the log plot), thus serving a dual purpose.</p> <p>A partial psychoacoustic justification for increasing FFT size at lower frequencies (or using wavelets) is that, over a certain mid-frequency range, the human ear/brain takes a length of time to determine the frequency or pitch of a sound roughly proportional to the period of the frequency.</p>
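A hedged sketch of the band-joining idea described above: run FFTs of decreasing length on the same signal and keep, from each, only the bins of the band it resolves best (longest FFT for the lowest band). The lengths and band edges below are arbitrary illustrative choices, not taken from any MTW implementation.

```python
import numpy as np

# Join subsets of three different-length FFTs into one frequency axis.
fs = 8000.0
rng = np.random.default_rng(2)
x = rng.standard_normal(1 << 14)

# (fft length, band low edge Hz, band high edge Hz) -- illustrative only
plan = [(4096, 0.0, 250.0), (1024, 250.0, 1000.0), (256, 1000.0, 4000.0)]
freqs, mags = [], []
for nfft, lo, hi in plan:
    f = np.fft.rfftfreq(nfft, d=1 / fs)
    m = np.abs(np.fft.rfft(x[:nfft])) / nfft     # one frame per resolution
    keep = (f >= lo) & (f < hi)                  # bins belonging to this band
    freqs.append(f[keep])
    mags.append(m[keep])

f_joined = np.concatenate(freqs)
m_joined = np.concatenate(mags)
assert np.all(np.diff(f_joined) > 0)             # joined axis stays monotonic
```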
77
Fourier transform
Question about ramp filter used in filtered backprojection
https://dsp.stackexchange.com/questions/34424/question-about-ramp-filter-used-in-filtered-backprojection
<p>The question is this. First, a ramp filter (in the frequency domain) is defined by $H(Q)=|Q|$. What are the responses of a ramp filter to (1) a constant function $f(r)=c$ and (2) a sinusoid function $f(r)=\sin(wr)$? What does the response mean? Following is my work. </p> <p>My work: </p> <ol> <li><p>First, take the Fourier transform of the function $f(r)=c$. It is $\int_{-\infty}^{\infty}f(r)e^{-2i\pi rQ}dr=c\delta(Q)$. Then multiply by the ramp filter and take the inverse Fourier transform. It is $\int_{-\infty}^{\infty}c\delta(Q)|Q|e^{2i\pi rQ}dQ=0$??</p></li> <li><p>Similarly, $\int_{-\infty}^{\infty}\sin(wr)e^{-2i\pi rQ}dr=\frac{\delta(Q-w/2\pi)-\delta(Q+w/2\pi)}{2i}$. So applying the ramp filter and the i.f.t. gives $\frac{w(e^{iwr}-e^{-iwr})}{4i\pi}=\frac{w\sin(wr)}{2\pi}$. </p></li> </ol> <p>Is this right?</p>
<p>To see if your math is correct, it is useful to first understand what is, in general, the effect of a filter on a signal, and then see if you can predict what the theoretical result should look like.</p> <p>If a filter has frequency response $H(Q)$, this means that its response to an input $e^{j2\pi Q_0 r}$ is the signal $H(Q_0)e^{j2\pi Q_0 r}$. In other words, a sinusoidal input of frequency $Q_0$ produces an output of the same frequency, but with amplitude $|H(Q_0)|$ and phase $\angle H(Q_0)$.</p> <p>In your first question, the input has frequency $Q_0=0$. The filter's response at that frequency is $|Q_0|=0$. Then, the filter's output will be 0: frequency $Q_0=0$ is completely absorbed by the filter and it does not appear at the output.</p> <p>In your second question, the input has frequency $Q_0=w/2\pi$. The filter's response at that frequency is $|Q_0|=w/2\pi$. The output, then, should be $\frac{w}{2\pi}\sin(wr)$.</p> <p>As you can see, following this line of reasoning we obtain the same results as you did by just solving the equations. This corroboration should give you confidence that your results are correct, and most importantly, you can understand what the filter is doing physically.</p>
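Both responses can also be confirmed with a discrete sketch (using the DFT as a stand-in for the continuous transform; an integer w keeps sin(wr) periodic on the grid): H(Q) = |Q| annihilates a constant and scales sin(wr) by w/(2π), matching the calculations above.

```python
import numpy as np

# Apply the ramp filter |Q| in the frequency domain and check both cases.
N = 1024
r = np.linspace(0, 2 * np.pi, N, endpoint=False)
Q = np.fft.fftfreq(N, d=r[1] - r[0])             # frequencies in cycles per unit r

def ramp_filter(f):
    return np.fft.ifft(np.fft.fft(f) * np.abs(Q)).real

w = 7.0                                          # integer for grid periodicity
assert np.allclose(ramp_filter(np.full(N, 3.0)), 0.0)                    # constant -> 0
assert np.allclose(ramp_filter(np.sin(w * r)), (w / (2 * np.pi)) * np.sin(w * r))
```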
78
Fourier transform
How to apply an FFT
https://dsp.stackexchange.com/questions/41870/how-to-apply-an-fft
<p>Okay, round 2.</p> <p>The issue I am having with implementing FFT is that different implementations require passing as arguments different types of data. From the WAV file you obtain samples of the amplitude recorded at the sample rate. </p> <p>As an example, the NAudio library takes an array of complex numbers as an argument: <a href="http://naudio.codeplex.com/SourceControl/latest#NAudio/Dsp/FastFourierTransform.cs" rel="nofollow noreferrer">source code</a></p> <pre><code>public static void FFT(bool forward, int m, Complex[] data) </code></pre> <p>On the other hand, the <a href="https://www.codeproject.com/Articles/20025/Sound-visualizer-in-C" rel="nofollow noreferrer">source code</a> for another implementation simply takes an array of doubles:</p> <pre><code>static public double[] FFTDb(ref double[] x) </code></pre> <p>My questions are, </p> <ol> <li>Why are they using different arguments -- is it simply a preference or some other factor? </li> <li>How do I go from the samples in the WAV data to the form they are asking for? Do I simply cast the 16-bit integers to double? Do I zero out the imaginary part?</li> </ol> <p>As for the output of the FFT function, I'm left with, in the case of the former, the original array modified by the FFT or, in the case of the latter, an array of doubles. From my understanding, each index in the output represents a range of frequencies depending on the sampling rate and the time resolution (number of samples passed).</p> <ul> <li><p>Am I right in concluding I simply find the magnitude at that index to determine the presence of that frequency range over the time interval of samples?</p></li> <li><p>Also, how do channels factor into all of this? Do you separate the channels and run the FFT on each channel? Do you combine the channels afterwards? Do you examine them independently?</p></li> </ul>
<p>The data section of a WAVE file is often an array of 16-bit signed integers. You may need to convert each element of that array from an integer into a floating-point double, and put that converted value into the real component of an element of a complex array, in order to use many common floating point FFTs.</p> <p>Many FFT implementations require a complex input vector. If you have strictly real data (for instance from a WAVE file), the imaginary component of every complex element will be zero (e.g. set it to zero if needed).</p> <p>Some FFT implementations only take real data (as a convenience), so you just give them a real data array, as they presumably add the needed zero-valued imaginary components internally.</p> <p>Note that if the FFT provides a complex output, then you will need to compute the magnitude of each complex element (square root of the sum of the squares, etc.) in the result array.</p>
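A minimal NumPy sketch of the conversion path described above; the int16 array below stands in for one channel of WAV data (about one cycle of a sine wave), since the idea is the same in any language.

```python
import numpy as np

# 16-bit PCM samples -> floats -> FFT (imaginary inputs implicitly 0)
# -> per-bin magnitudes.
pcm = np.array([0, 11585, 16384, 11585, 0, -11585, -16384, -11585],
               dtype=np.int16)

samples = pcm.astype(np.float64) / 32768.0       # int16 -> float in [-1, 1)
spectrum = np.fft.fft(samples)
magnitude = np.abs(spectrum)                     # sqrt(re^2 + im^2) per bin

assert np.argmax(magnitude[:len(pcm) // 2]) == 1 # energy at 1 cycle per frame
```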
79
Fourier transform
Understanding the meaning of amplitude in FFT
https://dsp.stackexchange.com/questions/41988/understanding-the-meaning-of-amplitude-in-fft
<p>I am recording data with a magnetometer of the background magnetic field in a building. I have applied the FFT algorithm to the data in order to look for the frequencies that appear in it. I would like to use this in order to identify (or at least make an educated guess) of the sources of the disturbances that I observe.</p> <p>My question is: What is the meaning that I can attribute to the amplitude that I obtain from the FFT algorithm? Is there some unit that can be ascribed to it?</p> <p>Looking at the formula for the continuous fourier transform (which I took from Wolfram Mathworld) : \begin{align} f(\nu)=\int\limits_{-\infty}^{+\infty}f(t)e^{-2\pi i \nu t} \mathrm{d}t \end{align} I do not really know how to accomodate the dimension of Tesla in there.</p> <p>Thank you</p>
<p>The continuous-time Fourier transform of a function <span class="math-container">$f(t)$</span> is in essence an integration of <span class="math-container">$f(t)$</span> multiplied with a complex exponential kernel: <span class="math-container">$$ F(\omega)=\int_{-\infty}^{\infty} f(t) e^{-j\omega t} dt \tag{1}$$</span></p> <p>Since the exponential function is unitless, the unit of the Fourier integral will be the multiplication of the units of the function <span class="math-container">$f(t)$</span> and the differential <span class="math-container">$dt$</span>.</p> <p>Assuming that the function <span class="math-container">$f(t)$</span> had a unit of <em>micro Tesla</em>, and its argument <span class="math-container">$t$</span> is time (in seconds), then the unit of <span class="math-container">$dt$</span> will be <em>seconds</em>. As a consequence, the unit of the Fourier transform, <span class="math-container">$F(\omega)$</span>, will be <strong>micro Tesla second</strong> <span class="math-container">$$\mu T \cdot s \tag{2}$$</span></p> <p>However, what you actually compute is the discrete-time Fourier transform, <span class="math-container">$F(e^{j\omega})$</span>, of the samples <span class="math-container">$f[n]= f(nT_s)$</span> of the function <span class="math-container">$f(t)$</span>, via the summation: <span class="math-container">$$F(e^{j\omega}) = \sum_{n=-\infty}^{\infty} f[n] e^{-j\omega n} \tag{3}$$</span> where <span class="math-container">$T_s$</span> is the sampling period in seconds.</p> <p>Furthermore, instead of the continuous-argument function <span class="math-container">$F(e^{j\omega})$</span>, you will compute its samples <span class="math-container">$F[k]$</span></p> <p><span class="math-container">$$ F[k] = \sum_{n=0}^{N-1} f[n] e^{-j \frac{2\pi}{N} n k} \tag{4}$$</span></p> <p>through a DFT (discrete Fourier transform) of the samples <span class="math-container">$f[n]$</span> of length <span class="math-container">$N$</span>, possibly 
implemented with an FFT algorithm.</p> <p>The unit of the samples <span class="math-container">$f[n]$</span> is the same as that of <span class="math-container">$f(t)$</span>, so the unit of <span class="math-container">$F(e^{j\omega})$</span>, and hence of the FFT samples <span class="math-container">$F[k]$</span>, is <strong>micro Tesla</strong>.</p> <p>Note that there is an (implicit) amplitude <a href="https://dsp.stackexchange.com/questions/41835/signal-amplitude-to-fft-amplitude/41849#41849">scaling</a> by <span class="math-container">$1/T_s$</span> in the computed DFT samples <span class="math-container">$F[k]$</span>; when you want to display the continuous-time Fourier transform <span class="math-container">$F(\omega)$</span> from the samples <span class="math-container">$F[k]$</span>, you multiply them by <span class="math-container">$T_s$</span>, which corrects not only the amplitude scaling but also the unit, making it <strong>micro Tesla second</strong> as in (2).</p>
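As a quick numerical check of the scaling described above, here is a minimal NumPy sketch; the test function $f(t)=e^{-|t|}$, the $\pm 50\,$s window and the sampling period are all assumed for illustration. Multiplying the raw DFT by $T_s$ recovers the continuous-time Fourier integral, whose DC value here is $\int e^{-|t|}\,dt = 2$.

```python
import numpy as np

# Assumed test signal: f(t) = exp(-|t|), whose CTFT is 2/(1 + omega^2),
# so the continuous-time transform at DC equals 2.
Ts = 0.01                      # sampling period in "seconds"
t = np.arange(-50, 50, Ts)     # window wide enough that f(t) ~ 0 at the edges
f = np.exp(-np.abs(t))

F_dft = np.fft.fft(f)          # unit: same as f (e.g. micro Tesla)
F_ctft = Ts * F_dft            # unit: micro Tesla * second, matching (2)

# The DC bin now approximates the continuous-time integral of f(t):
print(abs(F_ctft[0]))          # ~ 2.0
```

Without the factor `Ts`, the DC bin would be about `2/Ts`, i.e. off by exactly the implicit $1/T_s$ scaling the answer mentions.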
80
Fourier transform
What happens with signal in frequency spectrum when it is time shifted in time spectrum?
https://dsp.stackexchange.com/questions/17009/what-happens-with-signal-in-frequency-spectrum-when-it-is-time-shifted-in-time-s
<p>I have some trouble understanding what happens to a signal's frequency spectrum when the signal is shifted in the time domain.</p> <p>I am hoping that somebody will help me to understand that.</p> <p>Thank you very much.</p>
<p>Each frequency in the FT of a time shifted waveform is rotated in phase by an amount proportional to the frequency and proportional to the amount of time shift.</p> <p>If you delay a pure sinusoid by 25% of its period, its phase, referenced to any fixed point in time, will change by pi/2 radians. Delay a slightly higher-frequency sinusoid by the same amount of absolute time, and its phase will change more. So the phase change grows with frequency.</p> <p>If looking at a 3d plot of the FT of a time shifted signal, it looks like taking the FT before the time shift and twisting it. The twist will be linear, i.e. proportional to frequency. The more time shift, the greater the amount of twist per unit of frequency (or twist revolutions per graph width).</p>
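The linear phase "twist" described above can be verified numerically with the DFT, where delaying by $d$ samples rotates bin $k$'s phase by $-2\pi k d/N$; the signal, length and delay below are arbitrary assumptions.

```python
import numpy as np

# Assumed toy setup: random length-64 signal, circular delay of 5 samples.
N, d = 64, 5
rng = np.random.default_rng(0)
x = rng.standard_normal(N)
y = np.roll(x, d)                  # circularly time-shifted (delayed) copy

X, Y = np.fft.fft(x), np.fft.fft(y)
twist = np.exp(-2j * np.pi * np.arange(N) * d / N)   # phase linear in bin k

print(np.allclose(Y, X * twist))           # True: linear phase twist
print(np.allclose(np.abs(Y), np.abs(X)))   # True: magnitudes untouched
```

A larger `d` steepens the phase slope (more "twist revolutions" across the spectrum), exactly as the answer describes.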
81
Fourier transform
Why is signum function used to calculate Fourier transform of unit step function
https://dsp.stackexchange.com/questions/26406/why-is-signum-function-used-to-calculate-fourier-transform-of-unit-step-function
<p>I read in a standard textbook that the Fourier transform of the unit step function is calculated with the help of approximations and the signum function, as the integral for the unit step does not converge. What's so special about the signum function that it is used to calculate this Fourier transform? I tried an approximation:</p> <p>$$ \lim_{ a \rightarrow 0 } \int_{-\infty}^{+\infty} e^{-at} u(t) e^{-j\omega t} dt $$</p> <p>But I am getting the wrong result. Why is this so?</p>
<p>If somebody you trust told you that the Fourier transform of the sign function is given by</p> <p>$$\mathcal{F}\{\text{sgn}(t)\}=\frac{2}{j\omega}\tag{1}$$</p> <p>you could of course use this information to compute the Fourier transform of the unit step $u(t)$. Using</p> <p>$$u(t)=\frac12(1+\text{sgn}(t))\tag{2}$$</p> <p>(as pointed out by Peter K. in a comment), you get</p> <p>$$\mathcal{F}\{u(t)\}=\frac12\left(\mathcal{F}\{1\}+\mathcal{F}\{\text{sgn}(t)\}\right)=\pi\delta(\omega)+\frac{1}{j\omega}\tag{3}$$</p> <p>However, you don't <em>need</em> the sign function to compute the Fourier transform of the step function. As suggested in your question, using the function $e^{-at}u(t)$ and taking the limit $a\rightarrow 0^+$ will also result in the expression given in $(3)$.</p> <p>You can see this as follows. The Fourier transform of $e^{-at}u(t)$, $a&gt;0$, is given by</p> <p>$$\int_0^{\infty}e^{-at}e^{-j\omega t}dt=\frac{1}{a+j\omega}\tag{4}$$</p> <p>Taking the limit $a\rightarrow 0^+$ appears to give $1/j\omega$, but this is only valid for $\omega\neq 0$. Splitting the result $(4)$ in its real and imaginary part gives</p> <p>$$\frac{1}{a+j\omega}=\frac{a}{a^2+\omega^2}+\frac{\omega}{j(a^2+\omega^2)}\tag{5}$$</p> <p>The real part of $(5)$ is known as ($\pi$ times) a <a href="https://en.wikipedia.org/wiki/Dirac_delta_function#Representations_of_the_delta_function" rel="noreferrer">nascent delta function</a>. It has the same form as the <a href="https://en.wikipedia.org/wiki/Dirac_delta_function#Semigroups" rel="noreferrer">Poisson kernel</a>, which in the limit becomes a Dirac delta impulse. So for $a\rightarrow 0^+$ the limit of $(5)$, and hence of $(4)$, is actually given by</p> <p>$$\lim_{a\rightarrow 0^+}\frac{1}{a+j\omega}=\pi\delta(\omega)+\frac{1}{j\omega}\tag{6}$$</p> <p>which equals the expression in $(3)$.</p>
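The nascent-delta claim in $(5)$–$(6)$ can also be checked numerically: the area under $\mathrm{Re}\{1/(a+j\omega)\} = a/(a^2+\omega^2)$ stays $\pi$ no matter how small $a>0$ gets, while the function narrows toward a spike at $\omega=0$. The integration grid below is an assumed choice.

```python
import numpy as np

# Riemann-sum approximation of the integral of a/(a^2 + w^2) over a wide,
# assumed grid; analytically it is 2*arctan(1e4/a) ~ pi for every a > 0.
w = np.linspace(-1e4, 1e4, 2_000_001)
dw = w[1] - w[0]
areas = [np.sum(a / (a**2 + w**2)) * dw for a in (1.0, 0.1, 0.01)]
print(areas)   # each ~ pi, independent of a
```

Constant area with shrinking width is exactly the behaviour of $\pi\,\delta(\omega)$, which is why the naive limit $1/j\omega$ misses the impulsive part at $\omega=0$.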
82
Fourier transform
How the Fourier transform of a cosine signal is existed?
https://dsp.stackexchange.com/questions/33785/how-the-fourier-transform-of-a-cosine-signal-is-existed
<p>As I know, if an aperiodic continuous-time signal is absolutely integrable, i.e. </p> <p>$$\int\limits_{-\infty}^\infty \vert x(t) \vert \ dt \ &lt; \ \infty $$</p> <p>its Fourier transform exists. </p> <p>Also, the Fourier transform of $\cos(\omega_0 t)$ is $\pi(\delta(\omega-\omega_0)+\delta(\omega+\omega_0))$.</p> <p>Now, my question is: how can the Fourier transform of a cosine signal be the above expression when this signal is not integrable?</p>
<p>The Fourier Transform is defined only for functions that are absolutely integrable, or in other words $f\in\mathcal{L}_1$, where $\mathcal{L}_1$ is the set of all absolutely integrable functions. </p> <p>If you want to be mathematically rigorous, then you should conclude that the Fourier transform of the cosine does not exist, as it is not integrable. Apart from that, the Dirac delta does not fulfill the axioms of a function in modern mathematics, so you cannot say that the Fourier Transform of a function is something that is not a function. Instead, the Dirac delta is defined as a measure that can be used in the Lebesgue integral and that satisfies certain axioms (<a href="https://en.wikipedia.org/wiki/Dirac_delta_function" rel="nofollow">https://en.wikipedia.org/wiki/Dirac_delta_function</a>). However, the use of the cosine is valid for the common modulation operation, and is mathematically consistent (meaning that its Fourier transform exists):</p> <p>$\int_{-\infty}^{\infty} |x(t)\cos(2\pi t)e^{-i2\pi ft}|dt\leq \int_{-\infty}^{\infty} |x(t)|dt &lt; \infty $</p> <p>However, if you are just interested in the engineering or physics point of view, you can use the Dirac delta "function" as a trick that works as the measure in order to get the operations you are interested in. For the sinc function and square integrable functions, a Fourier Transform over the $\mathcal{L}_2$ space, which is the space of square integrable functions, is defined so that you can get the Fourier Transforms of those functions (<a href="http://math.mit.edu/~jerison/103/handouts/fourierint1.13.pdf" rel="nofollow">http://math.mit.edu/~jerison/103/handouts/fourierint1.13.pdf</a>), but if you are just interested in the engineering point of view, it is not important to be so rigorous mathematically.</p>
83
Fourier transform
How to recover $f(t)$ from Fourier Transform of its absolute value $\mathcal{F}|f(t)|$?
https://dsp.stackexchange.com/questions/34782/how-to-recover-ft-from-fourier-transform-of-its-absolute-value-mathcalf
<p>Let the Fourier Transform of a real signal, $f(t)$, be $\mathcal{F}(\omega)$. And the FT of the absolute value of the same signal, $|f(t)|$, be $\mathcal{F}(u)$. </p> <p>Can $\mathcal{F}(w)$ be recovered from $\mathcal{F}(u)$?</p> <p>For instance, the FT of $a \cdot \cos(ft)$ returns a spectrum in which the frequency $f$ has amplitude $a$.</p> <p>Can $f$ and $a$ be recovered from the FT of $a \cdot \cos(ft)$?</p>
<p>I recently <a href="https://dsp.stackexchange.com/questions/34373/pilot-tone-frequency-doubling/34374#comment65236_34374">was pointed to</a> a very nice trick by Robert Bristow Johnson which possibly applies here too to demonstrate this "inability" of recovery. I thought I'd share it here, in addition to the accepted answer.</p> <p>The trick is to see $|x(n)|$ as $sgn(x(n)) \cdot x(n)$ where $sgn$ is a function that returns 1 for positive sign and -1 for negative sign. In the case of a sinusoid, this equals modulation of the sinusoid at some frequency $f_{sin}$ with a square waveform at the same frequency. </p> <p><a href="http://fourier.eng.hmc.edu/e101/lectures/handout3/node2.html" rel="nofollow noreferrer">Multiplication in the time domain is equivalent to convolution in the frequency domain</a>. The spectrum of a sinusoid is a spike at $\pm f_{sin}$. <a href="https://en.wikipedia.org/wiki/Square_wave" rel="nofollow noreferrer">The spectrum of a square wave</a> is a series of spikes starting at $f_{sin}$ and repeating at odd harmonics. Therefore, the spectrum of $|x(n)|$ when $x(n)$ is a sinusoid, is a shifted version of the square wave spectrum by $f_{sin}$. This gives us a component at double the $f_{sin}$ with a bit of DC. In other words, it does full rectification to the sinusoid and it now sounds at double the frequency. 
We will come back to this.</p> <p>To ask whether we could recover the $x(n)$ from the $\mathcal{F}(|x(n)|) = \mathcal{F}(x(n) \cdot sgn(x(n)))$ is to ask if there is a <a href="https://en.wikipedia.org/wiki/Monotonic_function" rel="nofollow noreferrer">monotonic function</a> that realises the mapping:</p> <p>$$\mathcal{F}(x(n)) = g\left(\mathcal{F}(x(n) \cdot sgn(x(n)))\right)$$</p> <p>which, if we take one step further, becomes:</p> <p>$$\mathcal{F}(x(n)) = g\left(\mathcal{F}(x(n)) * \mathcal{F}(sgn(x(n)))\right)$$</p> <p>And now we are in trouble, because:</p> <p>$$\mathcal{F}(x(n)) * \mathcal{F}(sgn(x(n))) = \mathcal{F}(sgn(x(n))) * \mathcal{F}(x(n))$$</p> <p>Therefore, our $g$ would produce the same output for two different values, which is not the definition of a monotonic function. In other words, you can synthesize the same spectrum in more than one way.</p> <p>Now, if you <strong>fix</strong> $x(n)$ to be a sinusoid, then you could say, I will deconvolve $x(n)$ and a square wave and I will recover the original signal <strong>provided that</strong> I could also fix the phase. It doesn't necessarily have to start from 0. But it doesn't matter, say you employ some iterative method and after a lot of perspiration you recover $x(n)$. You see here you only have two "actors" and you know both of them very well, so you can tell them apart easily.</p> <p><strong>BUT</strong>, in the general case of some $|x(n)|=sgn(x(n)) \cdot x(n)$, where you only have $|x(n)|$, you can't really tell what its $sgn(x(n))$ was before it was lost!</p> <p>It's like looking at a photograph where the camera is shooting a scene through a mirror. Can you tell, just by looking at the photograph, if the camera was looking at the real scene <strong>OR</strong> the real scene through a mirror? Can you recover the "truth"? 
The role of the mirror here is played by the modulation function.</p> <p>So, it is impossible to perform this recovery because the signal is the product of two components, one of which you have lost forever.</p> <p>Hope this helps.</p>
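The frequency-doubling effect of full rectification mentioned at the start of this answer is easy to confirm with an FFT; the signal length and tone bin below are assumed toy values.

```python
import numpy as np

# Assumed toy signal: a cosine completing f0 = 8 cycles in an N-point window.
# Rectifying it (x -> |x| = sgn(x)*x) moves energy to DC and even harmonics;
# the fundamental at bin f0 vanishes entirely.
N, f0 = 1024, 8
n = np.arange(N)
x = np.cos(2 * np.pi * f0 * n / N)
X = np.abs(np.fft.fft(np.abs(x))) / N   # normalized magnitude spectrum of |x|

print(X[0], X[f0], X[2 * f0])   # large DC, ~0 at f0, strong line at 2*f0
```

The Fourier series of $|\cos\theta|$ predicts a DC level of $2/\pi \approx 0.64$ and a component of amplitude $\tfrac{4}{3\pi}$ at twice the original frequency (magnitude $\approx 0.21$ per side in this normalization), matching the printed values.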
84
Fourier transform
Spectrum of windowed version of original continuous signal
https://dsp.stackexchange.com/questions/37548/spectrum-of-windowed-version-of-original-continuous-signal
<p>Suppose we have the complex signal $x(t)= \exp(j\omega_0 t)$. Using the properties of the Fourier transform we can show its CTFT is a Dirac $\delta$ function.</p> <p>If anyone asks me about the spectrum of $x(t)$, "Does $x(t)$ have a continuous spectrum or a discrete spectrum?", my answer will be "The spectrum of $x(t)$ is discrete".</p> <p>Now, if I apply a rectangular window to the complex exponential $x(t)$ in the time domain and then take the CTFT, I end up with a $\mathrm{sinc}$ function. Now the spectrum of the windowed complex exponential is continuous and not discrete. Is this interpretation true?</p>
<p>If you don't understand the difference between the <a href="https://en.wikipedia.org/wiki/Fourier_transform" rel="nofollow noreferrer">Continuous Time Fourier Transform</a> (CTFT), the <a href="https://en.wikipedia.org/wiki/Discrete-time_Fourier_transform" rel="nofollow noreferrer">Discrete Time Fourier Transform</a> (DTFT) and the <a href="https://en.wikipedia.org/wiki/Discrete_Fourier_transform" rel="nofollow noreferrer">Discrete Fourier Transform</a> (DFT), now would be a good time to read about them. The <em>very short</em> version is that the DTFT yields the (continuous-valued) spectrum of a sequence (i.e., a sampled signal). The DFT computation results in a sampled version of the DTFT. To apply the DFT requires a finite number of samples (i.e., a time-domain window) whereas such restrictions are not placed on the DTFT in general. On the other hand, the CTFT deals with continuous time signals. There is a lot more to all of this, and I recommend you read more. </p> <p>It is true that a complex-valued signal $x(t) = \exp\left(j \omega_0 t\right)$ and the Dirac delta function form a CTFT pair. I can agree with you that the spectrum of this signal is discrete (nonzero for a finite number of frequencies, in this case a single frequency).</p> <p>After applying a rectangular window in time to the complex-valued signal $x(t)$, the CTFT is a frequency-translated sinc function centered at $\omega_0$ with a lobe width inversely proportional to the size of the time domain window. This shows us that the result of truncating a time domain signal with infinite support in time and a discrete spectrum in frequency can lead to a new time domain signal with finite support and a continuous spectrum. So, the answer is <em>yes</em>, the interpretation given in the question is true.</p>
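The line-versus-continuous distinction in this answer can be visualized with a DFT: sampled over exactly an integer number of cycles, a complex exponential occupies a single bin, while zero-padding the same truncated signal densely samples the underlying continuous sinc-shaped spectrum. The window length, tone bin and padded FFT size are assumed values.

```python
import numpy as np

# Assumed toy signal: complex exponential with exactly 10 cycles in N samples.
N = 64
n = np.arange(N)
x = np.exp(2j * np.pi * 10 * n / N)

X_line = np.abs(np.fft.fft(x)) / N        # one spectral line at bin 10
X_sinc = np.abs(np.fft.fft(x, 4096)) / N  # dense samples of the sinc lobes

print(np.sum(X_line > 1e-6))   # 1: a single discrete line
print(np.sum(X_sinc > 1e-2))   # hundreds of nonzero frequencies
```

The truncation (rectangular window) is what spreads the single line into a continuum; the zero-padded FFT merely evaluates that continuum on a finer grid.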
85
Fourier transform
Fourier transform of cosine to the power of 3
https://dsp.stackexchange.com/questions/6038/fourier-transform-of-cosine-to-the-power-of-3
<p>How can I find the Fourier transform of</p> <p>$$ f(x) = ( \cos(x) )^3$$</p> <p>I know that for $ g(x) = \cos(x) $</p> <p>$$\mathcal F \Big\{ g(x) \Big\} = \mathcal F \Big\{ \cos(x) \Big\} = \pi \Big [ \delta(w-\pi / 2) + \delta(w+\pi / 2) \Big ]$$</p> <p>But using this pair of Fourier transform how to obtain the $ F \Big\{ f(x) \Big\} $ ?? Is there a direct/simple way to do that?</p>
<p>One way would be to use the <a href="http://en.wikipedia.org/wiki/List_of_trigonometric_identities#Power-reduction_formula" rel="noreferrer">power-reduction trigonometric identity</a>:</p> <p>$$ \cos^3(x) = \frac{3 \cos(x) + \cos(3x)}{4} $$</p> <p>Due to the linearity property of the Fourier transform, you can transform each term separately and take their weighted sum to get the transform of the entire expression. The relationship we will use (<a href="http://en.wikipedia.org/wiki/Fourier_transform#Distributions" rel="noreferrer">from line 304 here</a>) is:</p> <p>$$ \mathcal{F}\{\cos(ax)\} = \pi\left(\delta(\omega - a) + \delta(\omega + a)\right) $$</p> <p>Which assumes that you're using the non-unitary, angular frequency definition of the Fourier transform:</p> <p>$$ \mathcal{F}\{x(t)\} = X(\omega) = \int_{-\infty}^{\infty}x(t) e^{-j\omega t}dt $$</p> <p>This would yield:</p> <p>$$ \begin{align} \mathcal{F}\{\cos^3(x)\} &amp;= \frac 34 \mathcal{F}\{\cos(x)\} + \frac 14 \mathcal{F}\{\cos(3x)\} \\ &amp;= \frac{\pi}{4}\left(3 \delta(\omega - 1) + 3\delta(\omega + 1) + \delta(\omega - 3) + \delta(\omega + 3) \right) \end{align} $$</p>
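The power-reduction result can be sanity-checked numerically: sampling $\cos^3$ over one full period and taking an FFT shows lines only at the first and third harmonics, with amplitudes $3/4$ and $1/4$. The length $N$ below is an arbitrary choice.

```python
import numpy as np

# cos^3(x) = (3*cos(x) + cos(3*x)) / 4, so the one-sided amplitude spectrum
# should read 0.75 at the fundamental and 0.25 at the third harmonic.
N = 256
x = np.cos(2 * np.pi * np.arange(N) / N) ** 3
X = 2 * np.abs(np.fft.fft(x)) / N   # one-sided amplitude per harmonic

print(X[1], X[3])   # ~0.75 and ~0.25
```

All other bins (including the second harmonic) come out numerically zero, consistent with the two-term identity.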
86
Fourier transform
Why does the periodic signal in time always give a discrete frequency spectrum
https://dsp.stackexchange.com/questions/17060/why-does-the-periodic-signal-in-time-always-give-a-discrete-frequency-spectrum
<p>I would like to know, why does the periodic signal in time always give a discrete frequency spectrum in FT?</p> <p>I know the equations, but I simply dont understand why is it so.</p> <p>Thanks!</p>
<p>Here's an intuitive explanation if the convolution theorem is taken for granted:</p> <p>Since the time-domain signal is periodic, one can say that it can be built by "copying and pasting" the same block of signal every period: your periodic signal can be expressed as a little block of signal (spanning one period) <strong>convolved</strong> with a dirac comb.</p> <p>Thus, its Fourier transform will be the Fourier transform of the little block <strong>multiplied</strong> by the Fourier transform of a Dirac comb (which is another Dirac comb). Multiplying a continuous signal by a Dirac comb yields a discrete signal.</p> <p>The same reasoning is also true the other way round (discrete in time implies periodic in frequency).</p>
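The "copy and paste" argument has a clean discrete analogue: tiling one block $K$ times makes the signal periodic, and in its DFT only every $K$-th bin can be nonzero, i.e. a line spectrum. The block contents and repetition count below are assumed.

```python
import numpy as np

# Assumed toy data: a random 32-sample block repeated K = 8 times.
rng = np.random.default_rng(1)
block = rng.standard_normal(32)
K = 8
x = np.tile(block, K)              # periodic signal, period 32

X = np.fft.fft(x)
nonzero = np.where(np.abs(X) > 1e-9)[0]
print(np.all(nonzero % K == 0))    # True: energy only at multiples of K
```

Multiplying the block's (continuous-looking) spectrum by the Dirac-comb spectrum of the repetition is exactly what zeroes out all bins that are not multiples of $K$.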
87
Fourier transform
Filtering and Fourier Transforming, does the order matter?
https://dsp.stackexchange.com/questions/22076/filtering-and-fourier-transforming-does-the-order-matter
<p>I have a signal $x(t)$. I want to find the Fourier Transform of it, $X(f)$, and then extract a narrow frequency range from $X(f)$ by use of a Band Pass Filter (BPF) in frequency domain.</p> <p>Can I instead filter $x(t)$ by using a BPF in time domain and then find the Fourier Transform of the filtered signal?</p> <p>I believe these two are equivalent.</p>
<p>There may be some slight differences due to the band-pass filtering of FFT results in the frequency domain being a circular convolution (with some wrap-around artifacts) rather than a pure linear convolution.</p> <p>If you filter first, starting in time before your FFT window, any windowing artifacts from any out-of-band spectrum will be reduced before leaking into the FFT filter's pass-band. Or alternatively, you could zero-pad the FFT window by the length of your filter's impulse response.</p>
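The circular-versus-linear convolution difference, and the zero-padding fix mentioned above, can be shown in a few lines; the signal and filter taps are assumed toy data.

```python
import numpy as np

# Multiplying FFTs implements *circular* convolution; zero-padding both
# sequences to the full linear length removes the wrap-around artifacts.
x = np.arange(1.0, 9.0)          # 8-sample toy "signal"
h = np.array([1.0, -1.0, 0.5])   # 3-tap filter impulse response

lin = np.convolve(x, h)          # linear convolution, length 10

# Frequency-domain filtering at the signal's own length: circular conv.
circ = np.real(np.fft.ifft(np.fft.fft(x) * np.fft.fft(h, len(x))))

# Zero-padded to length len(x) + len(h) - 1: matches linear convolution.
L = len(x) + len(h) - 1
padded = np.real(np.fft.ifft(np.fft.fft(x, L) * np.fft.fft(h, L)))

print(np.allclose(padded, lin))          # True: padding removes wrap-around
print(np.allclose(circ, lin[:len(x)]))   # False: tail wraps onto the start
```

The mismatch in `circ` is confined to the first `len(h) - 1` samples, which is precisely where the filter's tail wraps around the FFT window.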
88
Fourier transform
Repeated Fourier transform - what happens?
https://dsp.stackexchange.com/questions/31285/repeated-fourier-transform-what-happens
<p>I have a Fourier transformable complex function of an independent real variable a. I take the Fourier transform of it, giving me a complex function of a real variable b. Now I treat the resulting function as if it were in the original domain of a and again take the Fourier transform of it - instead of the inverse Fourier transform, as is usually done to get the original function. Maybe I will do this repeatedly. What are the relations between the original function and the transformed function at the various stages?</p>
<p>if you define the continuous Fourier Transform in a <a href="https://en.wikipedia.org/wiki/Unitary_operator" rel="nofollow">unitary</a> manner, my preferred unitary definition is</p> <p>$$ X(f) \triangleq \mathscr{F}\{ x(t) \} = \int\limits_{-\infty}^{+\infty} x(t) \, e^{-j 2 \pi f t} \ dt $$</p> <p>$$ x(t) \triangleq \mathscr{F}^{-1}\{ X(f) \} = \int\limits_{-\infty}^{+\infty} X(f) \, e^{+j 2 \pi f t} \ df $$</p> <p>then you can see a lotta symmetry and isomorphy in the forward and inverse transformation. in fact they are exchangeable since $-j$ and $+j$ both have equal claim to squaring to be $-1$. (i.e. if, in all of our textbooks and technical and math lit, every $j$ was replaced by $-j$ and vice versa, all of our theorems would be just as valid. the choice of which imaginary unit to go with which direction of Fourier transformation is a convention.)</p> <p>now, it's not hard to see where the <strong>duality theorem</strong> comes from. Given the above, then</p> <p>$$ x(-f) = \mathscr{F}\{ X(t) \} = \int\limits_{-\infty}^{+\infty} X(t) \, e^{-j 2 \pi f t} \ dt $$</p> <p>$$ X(-t) = \mathscr{F}^{-1}\{ x(f) \} = \int\limits_{-\infty}^{+\infty} x(f) \, e^{+j 2 \pi f t} \ df $$</p> <p>from that, it's not hard to see that</p> <p>$$ x(-t) = \mathscr{F} \Big\{ \mathscr{F}\{ x(t) \} \Big\} $$</p> <p>and that</p> <p>$$ x(t) = \mathscr{F} \Big\{ \mathscr{F}\{ x(-t) \} \Big\} $$</p> <p>so it's not hard to see that if $x(t)$ has even symmetry ($x(-t) = x(t)$) then transforming it twice gets you back to the original.</p> <p>and this</p> <p>$$ x(t) = \mathscr{F} \bigg\{\mathscr{F} \Big\{\mathscr{F} \big\{ \mathscr{F}\{ x(t) \} \big\}\Big\}\bigg\} $$</p> <p>so it's sorta like multiplying by $j$. do it four times and you wind up with the thing that you started with.</p> <p>you can use this fact to create an infinite number of Fourier transform pairs that are exactly equal to each other. 
all you have to do is create the Fourier transform three levels deep and add each to the original. each time you FT that, you get the same thing.</p> <p>there are two simple functions that i can think of that have themselves as their own FT. one is the Gaussian function and the other is the Dirac comb (both properly scaled).</p>
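the discrete analogue of this "four transforms give back the original" behaviour can be demonstrated with the DFT, where two forward transforms produce a time-reversed, scaled copy; the test vector and length below are assumed.

```python
import numpy as np

# Two forward DFTs: FFT(FFT(x))[n] = N * x[(-n) mod N], i.e. time reversal.
# Four forward DFTs (scaled by N^2) return the original -- like j^4 = 1.
rng = np.random.default_rng(2)
N = 16
x = rng.standard_normal(N) + 1j * rng.standard_normal(N)

xx = np.fft.fft(np.fft.fft(x)) / N                           # two transforms
x4 = np.fft.fft(np.fft.fft(np.fft.fft(np.fft.fft(x)))) / N**2  # four

print(np.allclose(xx, x[(-np.arange(N)) % N]))   # True: reversed copy
print(np.allclose(x4, x))                        # True: back to the start
```

For an even-symmetric input (`x[(-n) % N] == x[n]`) the two-transform result already equals the original, matching the continuous-time statement above.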
89
Fourier transform
Instantaneous frequency vs fourier frequency
https://dsp.stackexchange.com/questions/19469/instantaneous-frequency-vs-fourier-frequency
<p>Lets consider a pure sine signal at $\nu$ that is chopped using square pulses (like a burst mode on signal generators). My understanding is that instantaneous frequency is $\nu$ when oscillations are ON and 0 when they are OFF. On the other hand fourier spectrum is constant over time and contains also other frequencies, since it is not pure sine anymore. Is this correct? which one is used when calculating some frequency dependent physical quantity?</p>
<p>Yes, your understanding is correct. Instantaneous frequency is the time derivative of the sine argument. As Robert mentions in his answer, this argument is not defined where there is no sine (or complex exponential) function, but I think it's reasonable to consider it a sine with amplitude zero and constant argument. The function you describe is defined sectionwise. In sections where the sine is "on" the time derivative of its angle is $\nu$, in sections where the sine is "off" the time derivative is zero. So the instantaneous frequency is a function of time.</p> <p>The Fourier transform is not the right tool to analyze the instantaneous frequency as a function of time. As you have realized, the Fourier transform is constant in time. The FT of this special function is a shifted sinc function and thus contains other frequencies than $\nu$. </p> <p><strong>Update</strong> following your comment: 2 is correct. The output signal of a narrow bandpass filter with center frequency $\nu$ is not identical to the discussed "chopped" sine wave. The input signal has sharp transitions where it is forced to zero by the rectangular pulse train. These transitions are smoothed by the bandpass filter and you will see the dynamic behaviour of the filter in form of transients in the output signal where the input signal has sharp transitions. In other words: the bandpass filter cannot "react" instantaneously to the sudden change of frequency because it has a memory.</p>
90
Fourier transform
Fourier Transform negative amplitude meaning
https://dsp.stackexchange.com/questions/52406/fourier-transform-negative-amplitude-meaning
<p>I am reading this example <a href="http://www.thefouriertransform.com/pairs/truncatedCosine.php" rel="nofollow noreferrer">http://www.thefouriertransform.com/pairs/truncatedCosine.php</a></p> <p>What does it mean to have some of the frequency components be negative in its amplitude ? I am not talking about the negative frequencies.</p>
<p>The FFT returns complex values; to get the amplitude you need to take the absolute value. The real and imaginary parts together tell you about the signal's phase. Remember the FFT is changing the basis by projecting your signal onto a complex sinusoid: <span class="math-container">$$e^{i \omega t} = \cos(\omega t) + i \sin(\omega t)$$</span></p> <p>and thus your signal is now represented as a set of complex sinusoids, each with some phase and amplitude. Think about the phase of a vector <span class="math-container">$v = [a, \, i\cdot b]$</span> on the complex plane and what this would mean. </p>
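To make the point concrete, a real-valued FFT bin can come out negative simply because that component's phase is $\pi$; magnitude and phase separate the two pieces of information. The signal length, tone bin and phase below are assumed for illustration.

```python
import numpy as np

# Assumed toy signal: a cosine at bin 4 with phase pi (i.e. an inverted cosine).
N = 128
n = np.arange(N)
x = np.cos(2 * np.pi * 4 * n / N + np.pi)

X = np.fft.fft(x)
print(np.real(X[4]))      # negative real value: -N/2
print(np.abs(X[4]))       # the amplitude: N/2
print(np.angle(X[4]))     # the phase: ~pi
```

So a "negative amplitude" in a plot of the raw (real part of the) transform is really a positive amplitude carrying a $\pi$ phase shift.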
91
Fourier transform
Fourier transform is an isomorphism...but we don’t get when each frequency appears?
https://dsp.stackexchange.com/questions/62491/fourier-transform-is-an-isomorphism-but-we-don-t-get-when-each-frequency-appea
<p>Statistician here who wants to get some DSP knowledge for time series analysis.</p> <p>I’ve known for years that if we hit a function with a Fourier transform, we have an inverse Fourier transform that will recover the original function. However, doesn’t the interpretation of the Fourier transform in the frequency domain lack the time component? In other words, we can say that 100Hz appears in the signal with some intensity, but we can’t say if that appears at the beginning of the signal or the end. Those are very different signals to me, yet I’m supposed to be able to recover each by applying the inverse Fourier transform?</p> <p>There seems to be an inconsistency: we either lose the time information and can’t invert, or we retain the time information and can invert.</p> <p>What is the resolution to this apparent inconsistency?</p> <p>(I have a hunch that it has to do with the imaginary part of the Fourier transform, though I’m not sure how.)</p>
<p>It's true that taking the Fourier transform will leave you without any (visible) information on time and vice versa, but of course you don't lose any information, you just represent it in a way such that in one domain you only see time information, and in the other you only see frequency information.</p> <p>Take as an example the Fourier transform of a time-inverted (real-valued) function <span class="math-container">$x(-t)$</span>: its Fourier transform has the same magnitude as the Fourier transform of the original function <span class="math-container">$x(t)$</span>. The difference between these two Fourier transforms lies exclusively in the phase. So you're right to assume that timing information is encoded in the phase of the Fourier transform.</p> <p>There is no time localization in the Fourier transform because its basis functions are complex exponentials extending from <span class="math-container">$-\infty$</span> to <span class="math-container">$\infty$</span>. There are other transforms that will give you a certain degree of time <em>and</em> frequency localization, the most well-known of which is probably the <a href="https://en.wikipedia.org/wiki/Short-time_Fourier_transform" rel="noreferrer">Short-time Fourier transform</a>. Also take a look at <a href="https://dsp.stackexchange.com/q/17212/4298">this related question</a> and its answers.</p>
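The time-reversal example in this answer is easy to reproduce with the FFT: a signal and its reversed copy have identical magnitude spectra, and all the "when" information sits in the phase. The random test signal below is an assumed stand-in for $x(t)$.

```python
import numpy as np

# Assumed toy data: a real random signal and its time-reversed copy.
rng = np.random.default_rng(3)
x = rng.standard_normal(64)
xr = x[::-1]

X, Xr = np.fft.fft(x), np.fft.fft(xr)
print(np.allclose(np.abs(X), np.abs(Xr)))      # True: same magnitudes
print(np.allclose(np.angle(X), np.angle(Xr)))  # False: phases differ
```

Two clearly different time-domain signals, one magnitude spectrum: discarding the phase is what loses the time localization, not the transform itself.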
92
Fourier transform
Applying duality property to fourier transform of unit step function
https://dsp.stackexchange.com/questions/56388/applying-duality-property-to-fourier-transform-of-unit-step-function
<p>For continuous-time aperiodic signals, the duality property of the Continuous Time Fourier Transform (CTFT) is the following:</p> <p><span class="math-container">$$\mathscr{F}\Big\{x(t)\Big\} = X(f), \qquad\text{then} \quad \mathscr{F}\Big\{X(t)\Big\} = x(-f)$$</span></p> <p>Now we know that the Dirichlet conditions are not satisfied for the unit step function <span class="math-container">$u(t)$</span>, so its CTFT analysis and synthesis cannot be done. However, we can still do it provided we are willing to accept the occurrence of singularity functions like the Dirac delta impulse in its Fourier transform equation,</p> <p>i.e.</p> <p><span class="math-container">$$\begin{align} \mathscr{F}\Big\{u(t)\Big\} &amp;= \mathscr{F}\Big\{\tfrac{1}{2} + \tfrac{1}{2}\operatorname{sgn}(t) \Big\} \\ &amp;= \frac{\delta(f)}{2} + \frac{1}{j2\pi f} \\ \end{align}$$</span></p> <p>where the signum function is</p> <p><span class="math-container">$$\operatorname{sgn}(t) \triangleq \begin{cases} -1 \qquad &amp; t&lt;0 \\ 0 \qquad &amp; t=0 \\ +1 \qquad &amp; t&gt;0 \\ \end{cases}$$</span></p> <p>However, if I apply the duality property to the above result, then I should get the following:</p> <p><span class="math-container">$$\begin{align} \mathscr{F}\Big\{\frac{\delta(t)}{2} + \frac{1}{j2\pi t}\Big\} &amp;= u(-f) \\ &amp;= \tfrac{1}{2} + \tfrac{1}{2}\operatorname{sgn}(-f) \\ &amp;= \tfrac{1}{2} - \tfrac{1}{2}\operatorname{sgn}(f) \\ \end{align}$$</span></p> <p>However, when I read at least some books on the Fourier transform, I find that the result is <span class="math-container">$u(f)$</span> and not <span class="math-container">$u(-f)$</span>. The question is: why?</p>
<p>The result you got is correct and it is also expected according to the two first formulas in your question. If <span class="math-container">$X(f)$</span> is the Fourier transform of <span class="math-container">$x(t)$</span>, then the Fourier transform of <span class="math-container">$X(t)$</span> equals <span class="math-container">$x(-f)$</span>. If <span class="math-container">$x(t)=u(t)$</span> is the unit step function then</p> <p><span class="math-container">$$X(f)=\frac12 \delta(f)+\frac{1}{j2\pi f}\tag{1}$$</span></p> <p>and the Fourier transform of <span class="math-container">$X(t)$</span> is given by</p> <p><span class="math-container">$$\mathcal{F}\left\{\frac12 \delta(t)+\frac{1}{j2\pi t}\right\}=\frac12+\frac{1}{2 j}\mathcal{F}\left\{\frac{1}{\pi t}\right\}\tag{2}$$</span></p> <p>where we recognize <span class="math-container">$1/\pi t$</span> as the impulse response of an ideal Hilbert transformer, the Fourier transform of which is given by</p> <p><span class="math-container">$$\mathcal{F}\left\{\frac{1}{\pi t}\right\}=-j\;\textrm{sgn}(f)\tag{3}$$</span></p> <p>Combining <span class="math-container">$(2)$</span> and <span class="math-container">$(3)$</span> gives</p> <p><span class="math-container">$$\mathcal{F}\left\{X(t)\right\}=\frac12\big(1-\textrm{sgn}(f)\big)=u(-f)\tag{4}$$</span></p> <p>just as expected.</p> <p>Maybe you can clarify which books say otherwise.</p>
93
Fourier transform
Fourier transform of a damped cosine wave with a linear frequency chirp
https://dsp.stackexchange.com/questions/59160/fourier-transform-of-a-damped-cosine-wave-with-a-linear-frequency-chirp
<p>I want to take the Fourier transform of the following transient signal, <span class="math-container">$$f(t) = e^{-t/\tau} \cos((\omega_0 + m t)t)$$</span>, where <span class="math-container">$m$</span> is some gradient parameter in units of <span class="math-container">$\rm{Hz}/s$</span>. I thought this would be quite straightforward -- although most of my approaches have been made using Mathematica -- which struggles to provide anything useful.</p> <p>I would have assumed the resultant function would have a Lorentzian-like peak profile, in a similar way to the trivial Fourier transform of <span class="math-container">$$f(t) = e^{-t/\tau} \cos(\omega_0t)$$</span>. Does anyone have any ideas on how I can approach this, or an alternative to Fourier transforming a damped sinusoid with a linear (or even nonlinear) frequency chirp?</p>
<p>For the Fourier transform of the LFM chirp portion, you use the Principle of Stationary Phase (POSP). The POSP essentially says the main contribution in the Fourier integral comes from the portion where the derivative of the phase is zero - it assumes that the integrals of the oscillating components cancel themselves out.</p> <p>Using the POSP, the Fourier transform of an LFM chirp is another LFM chirp in the frequency domain. The assumption is not very good in cases where you have a low time-bandwidth product.</p> <p>For the complete signal, you have two options:</p> <ol> <li>Evaluate the Fourier transform of the magnitude portion and then convolve with the Fourier transform of the LFM chirp pulse.</li> <li>Keep the magnitude in while you are doing the POSP evaluation. You have to check on the assumptions the POSP makes. If I recall correctly, it usually assumes a slowly varying amplitude window.</li> </ol> <p>I believe you can find the use of the POSP for LFM pulses in the signal analysis books by Papoulis, and also in "Digital Processing of Synthetic Aperture Radar Data" by Cumming and Wong.</p>
94
Fourier transform
Fourier transform of discrete time unit step function
https://dsp.stackexchange.com/questions/61903/fourier-transform-of-discrete-time-unit-step-function
<p>To obtain fourier transform of u[n], <code>u[n] - u[n-1] = delta[n]</code> , taking fourier transform of both sides of the equation results in : <code>U(w) - exp(-jw) U(w) = 1</code> , hence : <code>U(w) = 1/(1-exp(-jw))</code> which is wrong and the right answer has an extra term. Which step is wrong in this possible solution? I know the right proof of fourier transform of <code>u[n]</code>, my question is regarding the wrong part of this solution.</p>
<p>The DTFT of the unit step is <span class="math-container">$$U(\omega) = \frac{1}{1 - e^{-j \omega}} + \pi \delta(\omega)$$</span> Applying the shift property as you did gives: <span class="math-container">$$\mathcal{F}(u[n] - u[n-1]) = U(\omega) - U(\omega)e^{-j \omega} = \frac{1}{1 - e^{-j \omega}} + \pi \delta(\omega) - [\frac{1}{1 - e^{-j \omega}} + \pi \delta(\omega)]e^{-j \omega}$$</span> that is <span class="math-container">$$\mathcal{F}(u[n] - u[n-1]) = \frac{1- e^{-j \omega}}{1 - e^{-j \omega}} +\pi\delta(\omega)( 1 - e^{-j\omega}) = 1+\pi\delta(\omega)( 1 - e^{-j\omega})$$</span> The second term is always zero: at <span class="math-container">$\omega = 0$</span> the factor <span class="math-container">$1 - e^{-j\omega} = 0$</span>, and the delta is zero at every other point, so you get <span class="math-container">$$\mathcal{F}(u[n] - u[n-1]) = 1 = \mathcal{F}(\delta[n])$$</span></p>
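A finite-length sanity check of the final identity is easy with numpy. This only exercises the delta-free part of the argument (the Dirac terms cancel exactly in the algebra above and have no finite-length counterpart): the first difference of a step is a unit impulse, whose DFT is 1 at every bin.

```python
import numpy as np

N = 16
u = np.ones(N)                            # u[n] for n = 0..N-1
d = u - np.concatenate(([0.0], u[:-1]))   # u[n] - u[n-1], with u[-1] = 0
D = np.fft.fft(d)                         # DFT of the difference
print(np.allclose(D, np.ones(N)))         # True: a flat, all-ones spectrum
```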
95
Fourier transform
Time shift and Phase Examples
https://dsp.stackexchange.com/questions/82329/time-shift-and-phase-examples
<p>Given are three cosines according to the following formula <span class="math-container">$x_i(t) = cos(2\pi f_i t)$</span> with <span class="math-container">$f_1 = 1Hz$</span> , <span class="math-container">$f_2 = 2Hz$</span> and <span class="math-container">$f_3 = 3Hz$</span> .</p> <p>The cosines are delayed by <span class="math-container">$\tau=0.1s$</span> to yield <span class="math-container">$y_i(t) = cos(2\pi f_i(t-0.1s))$</span>. This corresponds to a phase shift and the delayed cosines can also be written as <span class="math-container">$y_i(t) = cos(2\pi f_it + \phi_i)$</span></p> <p>Calculate the phase shifts <span class="math-container">$\phi_i$</span> for each cosine and verify that this corresponds to the Time Shift theorem of the Fourier Transform.</p> <p>My work:</p> <p>I found online a formula that is supposed to calculate the <span class="math-container">$\phi_i s$</span>. It is written like this <span class="math-container">$\phi_i=\tau *f *2\pi$</span> and calculated that <span class="math-container">$\phi_1 =\frac{2\pi}{10}$</span>, <span class="math-container">$\phi_2 =\frac{4\pi}{10}$</span> and <span class="math-container">$\phi_3 =\frac{6\pi}{10}$</span>.</p> <p>The Time Shift Theorem says that if the original function g(t) is shifted in time by a constant amount, it should have the same magnitude of the spectrum, G(f). That is, a time delay doesn't cause the frequency content of G(f) to change at all. This should make sense. Since the complex exponential always has a magnitude of 1, we see the time delay alters the phase of G(f) but not its magnitude.</p> <p>So the phase for these examples has changed but not the magnitude.</p> <p>First of all, are the calculations and the formula correct? Do my arguments make sense for the Time Shift theorem in regard to these three examples?</p> <p>Could someone please explain what is the difference between the time delayed signal and the phase shifted signal?</p> <p>Any help is much appreciated! Thanks!</p>
<p>Any signal <span class="math-container">$x(t)$</span> can be time-shifted: simply calculate <span class="math-container">$x(t + \Delta t)$</span>.</p> <p>A sinusoid can also be phase-shifted. Consider the cosine signal with phase <span class="math-container">$\phi$</span>: <span class="math-container">$$x(t) = \cos(2\pi f_0 t + \phi).$$</span> Now, time shift it: <span class="math-container">$$x(t + \Delta t) = \cos(2\pi f_0 (t + \Delta t) + \phi) = \cos(2\pi f_0 t + 2\pi f_0 \Delta t + \phi).$$</span> The phase of this delayed cosine is <span class="math-container">$2\pi f_0 \Delta t + \phi$</span>. The takeaway here is: for periodic sinusoids, a time-shift has a direct and straightforward relationship with a phase shift, and vice-versa. This is also true for complex sinusoids <span class="math-container">$x(t) = \exp(j2\pi f_0 t + \phi)$</span>.</p> <p>The definition of phase for non-sinusoidal signals is not as simple as that of sinusoids. For example, many signals can be written in the form <span class="math-container">$A(t)e^{j\phi(t)}$</span> where <span class="math-container">$A(t) &gt; 0$</span> and their phase is defined as <span class="math-container">$\phi(t)$</span>. Here, a time shift of <span class="math-container">$\Delta t$</span> results in a new phase <span class="math-container">$\phi(t + \Delta t)$</span>. 
See a full discussion <a href="https://dsp.stackexchange.com/q/75064/11256">here</a> and also <a href="https://dsp.stackexchange.com/q/31394/11256">here</a>.</p> <p>As an example of a slightly more complicated relationship between time shift and phase shift, consider the signal <span class="math-container">$$x(t) = \cos(2\pi f_0 t + \phi_0) + \cos(2\pi f_1 t + \phi_1).$$</span> The delayed signal is <span class="math-container">$$x(t - \Delta t) = \cos( 2\pi f_0 t + 2\pi f_0 \Delta t + \phi_0) + \cos(2\pi f_1 t + 2\pi f_1 \Delta t + \phi_1).$$</span> You can see that the time delay resulted in a different phase shift for each of the sinusoidal components of <span class="math-container">$x(t)$</span>. Fourier tells us that all signals are made up of sums of sinusoids, and each one of them has a phase, so this approach can be generalized to all signals, even non-periodic ones, whose Fourier transform is a continuous sum of sinusoids.</p>
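The delay of τ = 0.1 s from the question can be checked numerically against the per-frequency phase shifts. Note the sign: writing the delayed cosine as cos(2πft + φ), a delay of τ corresponds to φ = -2πfτ.

```python
import numpy as np

# Check cos(2*pi*f*(t - tau)) == cos(2*pi*f*t + phi) with phi = -2*pi*f*tau
tau = 0.1
t = np.linspace(0.0, 2.0, 1000)
for f in (1.0, 2.0, 3.0):
    delayed = np.cos(2 * np.pi * f * (t - tau))
    phi = -2 * np.pi * f * tau   # -0.2*pi, -0.4*pi, -0.6*pi for f = 1, 2, 3 Hz
    phase_shifted = np.cos(2 * np.pi * f * t + phi)
    assert np.allclose(delayed, phase_shifted)
print("one time delay == a different phase shift per frequency")
```

This is exactly the point of the answer: a single delay τ maps to a different phase shift for each sinusoidal component.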
96
Fourier transform
complex numbers and fourier transform
https://dsp.stackexchange.com/questions/59468/complex-numbers-and-fourier-transform
<p>Is it possible to define a scaling property for fourier transform when the scale factor is complex? Usually the scaling factor is real. What happen when a scaling factor is complex? </p>
<p>there are issues. given this convention for the continuous Fourier transform (and inverse)</p> <p><span class="math-container">$$ \mathscr{F} \Big\{ x(t) \Big\} \triangleq X(f) \triangleq \int\limits_{-\infty}^{+\infty} x(t) e^{-j 2 \pi f t} \ \mathrm{d}t $$</span></p> <p><span class="math-container">$$ \mathscr{F}^{-1} \Big\{ X(f) \Big\} \triangleq x(t) = \int\limits_{-\infty}^{+\infty} X(f) e^{+j 2 \pi f t} \ \mathrm{d}f $$</span></p> <p>it changes the path of integration from the real axis to something else. this comes up when using this fact:</p> <p><span class="math-container">$$ \mathscr{F} \Big\{ e^{- \pi t^2} \Big\} = e^{- \pi f^2} $$</span></p> <p>to get, along with using scaling, this result:</p> <p><span class="math-container">$$ \mathscr{F} \Big\{ e^{j \pi t^2} \Big\} = \sqrt{j} \, e^{-j \pi f^2} $$</span></p> <p>which is a linearly-swept <em>"chirp"</em> and its spectrum.</p>
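The Gaussian self-duality fact quoted above can be checked numerically with a plain Riemann-sum approximation of the Fourier integral (real path only; the complex-scaling/chirp case changes the integration path and cannot be checked this naively):

```python
import numpy as np

# Numerical check of F{exp(-pi t^2)} = exp(-pi f^2).
# The Gaussian decays so fast that truncation to |t| < 8 and a simple
# Riemann sum are both essentially exact here.
dt = 0.001
t = np.arange(-8.0, 8.0, dt)
x = np.exp(-np.pi * t**2)
for f in (0.0, 0.5, 1.0):
    X = np.sum(x * np.exp(-2j * np.pi * f * t)) * dt
    assert abs(X - np.exp(-np.pi * f**2)) < 1e-6
print("the Gaussian exp(-pi t^2) is numerically its own Fourier transform")
```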
97
Fourier transform
Windowing function for Inverse Fourier Transform
https://dsp.stackexchange.com/questions/70813/windowing-function-for-inverse-fourier-transform
<p>It is a common practice to apply windowing function, such as Hann or Hamming, to a time domain signal before FFT, in order to reduce spectral leakage. Often, we do 1) Windowing, 2) FFT, 3) frequency domain processing, such as filtering, then 4) Inverse FFT. My questions are: before inverse FFT, do we need to apply a windowing function in frequency domain as well? If we do, how?</p> <p>Thanks in advance.</p>
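Not an answer to the windowing question itself, but a minimal numpy sketch of the four-step pipeline the question describes (Hann window, forward FFT, frequency-domain processing, inverse FFT), with a made-up test signal and a toy low-pass mask standing in for the filtering step:

```python
import numpy as np

fs = 1000
n = np.arange(1024)
# Assumed test signal: 50 Hz tone plus a 300 Hz tone to be filtered out
x = np.sin(2 * np.pi * 50 * n / fs) + 0.5 * np.sin(2 * np.pi * 300 * n / fs)

w = np.hanning(len(x))          # 1) window (applied once, in the time domain)
X = np.fft.rfft(x * w)          # 2) FFT
f = np.fft.rfftfreq(len(x), 1 / fs)
X[f > 100] = 0                  # 3) frequency-domain processing (toy mask)
y = np.fft.irfft(X, len(x))     # 4) inverse FFT -- no second window applied here
```

In this sketch the window is applied only once, before the forward FFT; whether that is sufficient for a given application is exactly what the question is asking.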
98
Fourier transform
FFT of a stretched vector
https://dsp.stackexchange.com/questions/74933/fft-of-a-stretched-vector
<p>Let's say I have a small vector x=[a b c d]. Now I stretch this vector 3 times and get x3=[a a a b b b c c c d d d]. What would be the relation between fft(x) and fft(x3)?</p>
<p>Conceptually you should split this into two steps:</p> <ol> <li>Up-sample by a factor of 3, i.e. x = [a 0 0 b 0 0 ...]. This results in a 3 times periodic repetition of the spectrum.</li> <li>Convolve with a rectangular pulse of length three, i.e. h = [1 1 1]. This creates the time-stretched sequence you want. Convolution in time is multiplication in frequency, so your three copies of the original spectrum get multiplied with a <span class="math-container">$sinc$</span> function.</li> </ol>
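The two-step decomposition can be verified directly with numpy, using numbers in place of a, b, c, d. The "sinc-like" factor is, in the DFT setting, simply the 12-point DFT of the length-3 pulse h = [1, 1, 1]:

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0])          # stand-ins for a, b, c, d
x3 = np.repeat(x, 3)                        # [a a a b b b c c c d d d]

X3 = np.fft.fft(x3)
X_repeated = np.tile(np.fft.fft(x), 3)      # step 1: spectrum of the zero-stuffed x
H = np.fft.fft([1.0, 1.0, 1.0], n=len(x3))  # step 2: 12-point DFT of the pulse

print(np.allclose(X3, X_repeated * H))      # True
```

So fft(x3) is the 3-fold periodic repetition of fft(x), bin-by-bin multiplied by the DFT of the rectangular pulse.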
99
Laplace transform
Inverse Laplace transform of two-sided and one-sided Laplace transform
https://dsp.stackexchange.com/questions/54855/inverse-laplace-transform-of-two-sided-and-one-sided-laplace-transform
<p>As I read in <a href="https://en.wikipedia.org/wiki/Laplace_transform" rel="noreferrer">Wikipedia</a>, there are two types of Laplace transforms</p> <ul> <li><p>One-sided Laplace transform: <span class="math-container">$F(s) = \int_{0}^\infty e^{-st} f(t) dt$</span></p></li> <li><p>Two-sided Laplace transform: <span class="math-container">$F(s) = \int_{-\infty}^\infty e^{-st} f(t) dt$</span></p></li> </ul> <p>But they give only one formula for Inverse Laplace transform:</p> <p><span class="math-container">$\hspace{3.0cm} f(t) = \frac{1}{2\pi i} \lim_{T \to \infty} \int_{\gamma - i T}^{\gamma + i T} e^{st} F(s) ds$</span></p> <p>My question is that, does the type of Laplace transform I use affects the Inverse formula ?</p> <h3>p.s:</h3> <p>I've proved the Inverse Laplace transform above corresponding to Two-sided Laplace transform using Fourier transform. But I've not come up with any idea of proving the correctness of the Inverse Laplace transform corresponding to One-sided Laplace transform.</p> <p>According to my proof, the Inverse transform above is correct for One-sided transform if <span class="math-container">$f$</span> satisfies <span class="math-container">$f(t) = 0$</span> <span class="math-container">$\forall t &lt; 0$</span>. In other words,</p> <p><span class="math-container">$\hspace{3.0cm} f(t) = \frac{1}{2\pi i} \lim_{T \to \infty} \int_{\gamma - i T}^{\gamma + i T} e^{st} F(s) ds$</span>, <span class="math-container">$\forall t \geq 0$</span></p>
<p>The inversion formula is the same for both types of transforms:</p> <p><span class="math-container">$$f(t)=\frac{1}{2\pi j}\int_{\alpha-j\infty}^{\alpha+j\infty}F(s)e^{st}ds\tag{1}$$</span></p> <p>The difference is in the choice of the constant <span class="math-container">$\alpha$</span>. The line <span class="math-container">$\textrm{Re}\{s\}=\alpha$</span> must be inside the region of convergence (ROC). For causal functions (i.e., functions for which <span class="math-container">$f(t)=0$</span> for <span class="math-container">$t&lt;0$</span>), the ROC is to the right of the pole with the most positive real part, whereas for non-causal functions, the ROC is a vertical strip between two poles.</p>
100
Laplace transform
Is the Laplace transform redundant?
https://dsp.stackexchange.com/questions/26146/is-the-laplace-transform-redundant
<p>The Laplace transform is a generalization of the Fourier transform since the Fourier transform is the Laplace transform for $s = j\omega$ (i.e. $s$ is a pure imaginary number = zero real part of $s$).</p> <blockquote> <p>Reminder:</p> <p>Fourier transform: $X(\omega) = \int x(t) e^{-j\omega t} dt$</p> <p>Laplace transform: $X(s) = \int x(t) e^{-s t} dt$</p> </blockquote> <p>Besides, a signal can be exactly reconstructed from its Fourier transform as well as its Laplace transform.</p> <p>Since only a part of the Laplace transform is needed for the reconstruction (the part for which $\Re(s) = 0$), the rest of the Laplace transform ($\Re(s) \neq 0$) seems to be useless for the reconstruction...</p> <p>Is that true?</p> <p>Also, can the signal be reconstructed from another part of the Laplace transform (e.g. for $\Re(s)=5$ or $\Im(s)=9$)?</p> <p>And what happens if we compute a Laplace transform of a signal, then change only one point of the Laplace transform, and compute the inverse transform: do we come back to the original signal?</p>
<p>The Fourier and the Laplace transform obviously have many things in common. However, there are cases where only one of them can be used, or where it's more convenient to use one or the other.</p> <p>First of all, even though in the definitions you simply replace $s$ by $j\omega$ or vice versa to go from one transform to the other, this cannot generally be done when given the Laplace transform $X_L(s)$ or the Fourier transform $X_F(j\omega)$ of a function. (I use different indices because the two functions can be different for the same time domain function). There are functions for which only the Laplace transform exists, e.g., $f(t)=e^{at}u(t)$, $a&gt;0$, where $u(t)$ is the Heaviside step function. The reason is that the integral in the definition of the Laplace transform only converges for $\Re\{s\}&gt;a$, which implies that the corresponding integral in the definition of the Fourier transform does not converge, i.e. the Fourier transform doesn't exist in this case.</p> <p>There are functions for which both transforms exist, but $X_F(j\omega)\neq X_L(j\omega)$. One example is the function $f(t)=\sin(\omega_0t)u(t)$, for which the Fourier transform contains Dirac delta impulses.</p> <p>Finally, there are also functions for which only the Fourier transform exists, but not the Laplace transform. This means that the integral in the definition of the Laplace transform only converges (in a specific sense) for $s=j\omega$, but for no other values of $s$. The Laplace transform is only said to exist if the integral converges in a half-plane or in a vertical strip of finite size of the complex $s$-plane. Such functions for which only the Fourier transform exists include complex exponentials and sinusoids ($-\infty&lt;t&lt;\infty$), and impulse responses of ideal brick-wall filters, which are related to the sinc function. 
So, e.g., the functions $f(t)=\sin(\omega_0 t)$ or $f(t)=\sin(\omega_ct)/\pi t$ do not have a Laplace transform but they do have a Fourier transform.</p> <p>The Laplace transform can be a convenient tool for analyzing the behavior of linear time-invariant (LTI) systems by considering their transfer function, which is the Laplace transform of their impulse response. The poles and zeros of the transfer function in the complex $s$-plane conveniently characterize many system properties and are useful for an intuitive understanding of the system's behavior. Furthermore, the <em>unilateral</em> Laplace transform is very useful for analyzing LTI systems with non-zero initial conditions. The Fourier transform is a useful tool for analyzing ideal (non-causal, unstable) systems, such as ideal low pass or band pass filters.</p> <p>Also have a look at <a href="https://dsp.stackexchange.com/questions/15351/fourier-transform-of-exponent-delta-pulse-or-hyperbola/15356#15356">this answer</a> to a related question.</p>
101
Laplace transform
Finding Laplace Transform without ROC
https://dsp.stackexchange.com/questions/27369/finding-laplace-transform-without-roc
<p>While studying the Laplace Transform I found that the region of convergence (ROC) is important: some problems share the same Laplace Transform, and only the ROC lets us take the correct inverse Laplace Transform. Now that I am practicing inverse Laplace Transform problems, I find that almost every problem asking for $x(t)$ is given without an ROC. So I want to ask: without having the ROC, how can we solve the inverse Laplace Transform?</p>
<p>Strictly speaking you can't because without specifying the ROC, the inverse Laplace transform is generally not unique. However, in many contexts there is the implicit assumption of causality of the corresponding time function (i.e., $x(t)=0$ for $t&lt;0$), which is equivalent to stating that the ROC is a right half-plane.</p>
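Symbolic tools make the same implicit causality assumption. As a small SymPy sketch (the transform 1/(s+3) is just an example), `inverse_laplace_transform` returns the causal time function, i.e. the one whose ROC is the right half-plane:

```python
from sympy import symbols, exp, inverse_laplace_transform

s, t = symbols('s t')

# SymPy implicitly picks the causal inverse, i.e. the ROC Re{s} > -3
x = inverse_laplace_transform(1 / (s + 3), s, t)
print(x)   # exp(-3*t)*Heaviside(t)
```

The `Heaviside(t)` factor makes the causality assumption explicit: the returned function is identically zero for t < 0.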
102
Laplace transform
From Fourier transform to Laplace Transform
https://dsp.stackexchange.com/questions/56171/from-fourier-transform-to-laplace-transform
<p>It's well known that you can estimate the Fourier Transform <span class="math-container">$X(f)$</span> of a signal <span class="math-container">$x(t)$</span> via its Laplace Transform <span class="math-container">$X(s)$</span>, just by setting <span class="math-container">$s = j2\pi f$</span> to the latter, as long as the region of convergence includes the imaginary axis. </p> <p>However, I do not have a clear view of how (and when) we can obtain the Laplace Transform via the Fourier Transform of a signal (which is the opposite of what I've stated before).</p> <p>For example, <span class="math-container">$x(t) = e^{-at}u(t)$</span> has a Fourier Transform <span class="math-container">$X(f) = \frac{1}{a+j2\pi f}$</span> as long as <span class="math-container">$a &gt; 0$</span>. Its Laplace Transform is <span class="math-container">$X(s) = \frac{1}{a+s}$</span>, for any value of <span class="math-container">$a$</span>, as long as <span class="math-container">$\mathrm{Re}\{s\} &gt; -a$</span>. We can see that if we set <span class="math-container">$j2\pi f = s$</span> to the Fourier Transform, we can directly obtain the Laplace Transform. The same holds for any rational function of <span class="math-container">$j2\pi f$</span>.</p> <p>Is there a theorem or something that can be clearly stated about it? It looks to me that it has something to do with the convergence of the Fourier integral (if it does converge, then the Laplace Transform converges as well).</p>
<p>You need to distinguish three cases:</p> <ol> <li><p>There are Dirac impulses in the expression for the Fourier transform. In this case you can't just replace <span class="math-container">$j\omega$</span> by <span class="math-container">$s$</span> to obtain the Laplace transform. The Laplace transform might not exist, or its form is different from the expression for the Fourier transform. A simple example for which the Laplace transform doesn't exist is <span class="math-container">$x(t)=e^{j\omega_0t}$</span> with Fourier transform <span class="math-container">$X(j\omega)=2\pi\delta(\omega-\omega_0)$</span>. An example for which the Laplace transform exists but for which it cannot be obtained by setting <span class="math-container">$j\omega=s$</span> is <span class="math-container">$x(t)=u(t)$</span> with Fourier transform <span class="math-container">$X(j\omega)=\pi\delta(\omega)+\frac{1}{j\omega}$</span> and Laplace transform <span class="math-container">$X_L(s)=\frac{1}{s}$</span>.</p></li> <li><p>There are no Dirac impulses in the expression for the Fourier transform, but replacing <span class="math-container">$j\omega$</span> by <span class="math-container">$s$</span> results in poles on the imaginary axis. In this case, replacing <span class="math-container">$j\omega$</span> by <span class="math-container">$s$</span> results in a valid expression for the Laplace transform, but it's the Laplace transform of a different time domain function. 
Example: <span class="math-container">$X(j\omega)=\frac{1}{j\omega}$</span> corresponds to <span class="math-container">$x(t)=\frac12\textrm{sgn}(t)$</span>, whereas <span class="math-container">$X(s)=\frac{1}{s}$</span> corresponds to <span class="math-container">$x(t)=u(t)$</span>.</p></li> <li><p>There are no Dirac impulses in the expression for the Fourier transform, and replacing <span class="math-container">$j\omega$</span> by <span class="math-container">$s$</span> does not result in any poles on the imaginary axis. In that case, the Laplace transform can be found by replacing <span class="math-container">$j\omega$</span> by <span class="math-container">$s$</span>. However, the expression for the Laplace transform generally corresponds to several different time-domain functions, depending on the chosen region of convergence (ROC). Only the one with the ROC including the imaginary axis corresponds to the time-domain function described by the given Fourier transform.</p></li> </ol>
103
Laplace transform
Why the unilateral Laplace transform?
https://dsp.stackexchange.com/questions/61733/why-the-unilateral-laplace-transform
<p>Why is the Laplace transform commonly taught as the unilateral Laplace transform?</p> <p>I mean, for the Fourier transform, we commonly have the bilateral transform... if the signal is 0 for <span class="math-container">$t&lt;0$</span>, then it turns into a unilateral Fourier transform. Why not have this same convention for Laplace transform? Why specifically introduce the unilateral version?</p>
<p>The widespread use of the unilateral Laplace transform reflects the fact that in practice we often deal with causal systems and signals that have a defined starting time (usually chosen as <span class="math-container">$t_0=0$</span>).</p> <p>The Fourier transform is mainly used for analyzing ideal signals and systems, such as ideal filters (e.g., low pass, high pass, etc.) and ideal signals such as perfect sinusoids. In these cases we have to deal with non-causal systems with impulse responses that extend from <span class="math-container">$-\infty$</span> to <span class="math-container">$\infty$</span>. The same is of course true for sinusoidal signals or complex exponentials. Note that neither ideal filters nor the signals mentioned above can be treated by the Laplace transform.</p> <p>One of the most important features of the unilateral Laplace transform is that it can be used to elegantly solve differential equations with initial conditions. The initial conditions are taken into account by the well-known differentiation property of the unilateral Laplace transform:</p> <p><span class="math-container">$$\mathcal{L}\{f'(t)\}=sF(s)-f(0^-)\tag{1}$$</span></p> <p>where <span class="math-container">$f(t)$</span> is a differentiable function, and <span class="math-container">$F(s)$</span> is its (unilateral) Laplace transform. The Fourier transform doesn't have an equivalent to <span class="math-container">$(1)$</span> for taking initial conditions into account. If the Fourier transform is to be used for solving a differential equation with non-zero initial conditions, then the initial conditions need to be modeled as additional sources.</p>
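The initial-condition mechanism in (1) can be sketched with SymPy for a toy first-order ODE. The equation y'(t) + 2y(t) = 0 with y(0⁻) = 3 is an arbitrary example, not from the question:

```python
from sympy import symbols, exp, solve, inverse_laplace_transform

s, t = symbols('s t')
Y = symbols('Y')

# Property (1):  L{y'} = s*Y(s) - y(0-),  so the ODE becomes algebraic:
# (s*Y - 3) + 2*Y = 0
Ys = solve((s * Y - 3) + 2 * Y, Y)[0]     # Y(s) = 3/(s + 2)
y = inverse_laplace_transform(Ys, s, t)
print(Ys, y)                              # 3/(s + 2) and 3*exp(-2*t)*Heaviside(t)
```

The initial condition enters purely algebraically through the `- 3` term, which is exactly the convenience the answer describes.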
104
Laplace transform
Laplace Transform and Inverse laplace Transform for 2D images python code available?
https://dsp.stackexchange.com/questions/93330/laplace-transform-and-inverse-laplace-transform-for-2d-images-python-code-availa
<p>I am wondering if there is any implementation of the Laplace Transform and Inverse Laplace Transform available for 2D data (i.e., images). For example, a batch of <code>N</code> input sequences of dimension <code>D</code> can be reshaped into a 2D image with width <code>W</code> and height <code>H</code>; a 2D FFT is then applied in both directions, and after processing in the frequency domain, the result is transformed back to the spatial domain using the inverse FFT. The <code>pytorch deep learning library</code> has the functions as follows:</p> <pre><code># x: (B x N x D) the token features, B x H x W x D (where N = H * W), B: batch size # K: the frequency-domain filter, H x W_hat x D (where W_hat = W // 2 + 1) X = rfft2(x, dim=(1, 2)) X_tilde = X * K x = irfft2(X_tilde, dim=(1, 2)) </code></pre> <p>Is there any similar implementation for the Laplace Transform in Python?</p>
105
Laplace transform
Laplace transform of derivative
https://dsp.stackexchange.com/questions/82749/laplace-transform-of-derivative
<p>Here is a short proof that Laplace Transform of <span class="math-container">$x'(t)$</span> is Laplace transform of <span class="math-container">$x(t)$</span> multiplied by s:</p> <p><a href="https://i.sstatic.net/HMI1l.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/HMI1l.png" alt="enter image description here" /></a></p> <p>On the other hand, the proof that I know uses integration by parts:</p> <p><a href="https://i.sstatic.net/y6ohQ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/y6ohQ.png" alt="enter image description here" /></a></p> <p>One condition for the second proof is that <span class="math-container">$x(t)e^{-st}$</span> decays to zero as <span class="math-container">$|t|$</span> goes to <span class="math-container">$\infty$</span></p> <p>Why does the second proof require this condition while the first proof doesn't require any?</p>
<p>The existence of <span class="math-container">$\mathcal L\{x*\delta'\}=\mathcal L\{x\}\cdot\mathcal L\{\delta'\}$</span> requires the same subexponential behaviour from <span class="math-container">$x$</span> (if <span class="math-container">$\lim_{|t|\to\infty}xe^{-st} \ne 0$</span>, then <span class="math-container">$\mathcal L \{x\}$</span> doesn't exist). So, that's not a weaker requirement.</p>
106
Laplace transform
Laplace transform of averaging operator
https://dsp.stackexchange.com/questions/36106/laplace-transform-of-averaging-operator
<p>I am studying dc-dc converters now. I have a problem with the Laplace transform of the averaging operator, as in the image below.</p> <p>Can anyone help me derive the Laplace transform result $G_{av}(s)$ as in the image?</p> <p><a href="https://i.sstatic.net/VBXLQ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/VBXLQ.png" alt="enter image description here"></a></p>
<p>Here's the outline of the argument, feel free to fill in the details.</p> <p>The averaging operator is like a convolution with a "square" pulse of height $1/T_s$ supported on the interval $[-T_s/2, T_s/2]$. </p> <p>You can express the square pulse as a sum of two heaviside step functions. </p> <p>Finally, recall the Laplace transform of a step function $\mathcal{L} \{H(t-T_s/2)\}(s) = \frac{e^{-sT_s/2}}{s}.$</p>
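The step-function transform quoted at the end can be spot-checked numerically for one assumed pair of values (Ts = 1, s = 2), by evaluating the defining integral with a midpoint Riemann sum:

```python
import numpy as np

# Numerical check of  L{H(t - Ts/2)}(s) = exp(-s*Ts/2)/s  for a real s.
# The Heaviside factor just kills the integrand for t < Ts/2.
Ts, s = 1.0, 2.0                           # assumed values for illustration
dt = 1e-4
t = np.arange(Ts / 2 + dt / 2, 20.0, dt)   # midpoint sample grid on [Ts/2, 20]
numeric = np.sum(np.exp(-s * t)) * dt
closed_form = np.exp(-s * Ts / 2) / s
print(numeric, closed_form)                # both ~ 0.18394
```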
107
Laplace transform
Confusion in basics of Laplace Transform
https://dsp.stackexchange.com/questions/27179/confusion-in-basics-of-laplace-transform
<p>I have a few confusions while starting the Laplace Transform. So far I have studied Fourier series and the Fourier Transform. The basic difference I found in different books is that the Fourier Transform considers only the imaginary part, whereas the Laplace transform considers both real and imaginary parts for general values.<br> i) I want to ask: is that the only difference between the Laplace and Fourier Transforms? Then I saw the two different equations of the Laplace transform: the bilateral Laplace Transform $$ X(s) = \int _{-\infty}^{+\infty} x(t) e^{-st} dt$$ whereas the second equation is called the unilateral Laplace Transform and is defined as: $$ X(s) = \int _{0}^{+\infty} x(t) e^{-st}dt $$ It omits the negative part and only integrates over $t&gt;0$</p> <p>ii) Here I want to ask: what is the reason for omitting the $t&lt;0$ part?</p> <p>And lastly, there was an example $$ x(t)=e^{-at}u(t)$$ After applying the Laplace transform it was written that the transform exists only if $Re(s+a)$ is positive.<br> iii) Now here I am confused: why must it be positive?</p>
<p>The unilateral Laplace transform is used for analyzing causal linear time-invariant systems, which have an impulse response $h(t)$ that is zero for $t&lt;0$. The unilateral Laplace transform can be used to solve initial value problems, due to the correspondence</p> <p>$$x'(t)\Longleftrightarrow sX(s)-x(0)$$</p> <p>where $x(0)$ is a given initial value for the function $x(t)$. Note that for the bilateral Laplace transform the equivalent correspondence is simply $x'(t)\Longleftrightarrow sX(s)$.</p> <p>Concerning the signal $x(t)=e^{-at}u(t)$, note that its Laplace transform is</p> <p>$$X(s)=\int_0^{\infty}e^{-(a+s)t}dt$$</p> <p>This integral only converges if the exponential decays, which results in the condition $\text{Re}(a+s)&gt;0$. This condition defines the region of convergence (ROC) of the Laplace transform. $X(s)$ only exists for values of the complex variable $s$ satisfying $\text{Re}(s)&gt;-\text{Re}(a)$.</p>
108
Laplace transform
Intuitive interpretation of Laplace transform
https://dsp.stackexchange.com/questions/11008/intuitive-interpretation-of-laplace-transform
<p>So I am getting to grips with Fourier transforms. Intuitively now I definitely understand what it does and will soon follow some classes on the mathematics (so the actual subject). But then I go on reading about the Laplace transform and there I kind of lose it. What is the moment of a signal? Why is the Fourier transform a special case of the Laplace transform? How can I come to grips with the Laplace transform?</p> <p>I've looked at these sources before I asked this question:</p> <p><a href="https://dsp.stackexchange.com/questions/536/what-is-meant-by-a-systems-impulse-response-and-frequency-response/539#539">What is meant by a system&#39;s &quot;impulse response&quot; and &quot;frequency response?&quot;</a></p> <p><a href="https://dsp.stackexchange.com/questions/8769/how-to-distinguish-between-the-different-frequency-domains/8770#8770">How to distinguish between the different frequency domains?</a></p> <p><a href="https://dsp.stackexchange.com/questions/2721/amplitude-vs-frequency-response">Amplitude vs Frequency Response</a></p> <p><a href="https://dsp.stackexchange.com/questions/69/why-is-the-fourier-transform-so-important/70#70">Why is the Fourier transform so important?</a></p> <p><a href="http://en.wikipedia.org/wiki/Laplace_transform" rel="noreferrer">http://en.wikipedia.org/wiki/Laplace_transform</a></p>
<p>If you have an understanding of Fourier transforms then you probably already have a conceptual model of transforming signals into the frequency domain. The Laplace transform provides an alternative frequency domain representation of the signal - usually referred to as the "S domain" to differentiate it from other frequency domain transforms (such as the Z transform - which is essentially a discretised equivalent of the Laplace transform).</p> <p><strong>What is the moment of a signal?</strong></p> <p>As you are no doubt aware the Laplace transform gives us a description of a signal from its moments, similar to how the Fourier transform gives us a description from phase and amplitudes.</p> <p>Broadly speaking a moment can be considered a measure of how a distribution diverges from the mean value of a signal - the first moment is actually the mean, the second is the variance etc... (these are known collectively as "moments of a distribution")</p> <p>Given our function F(t) we can calculate the n'th derivative at t=0 to give our n'th moment. Just as a signal can be described completely using phase and amplitude, it can be described completely by all of its derivatives. </p>
109
Laplace transform
Basic difference between Fourier transform and laplace transform?
https://dsp.stackexchange.com/questions/58413/basic-difference-between-fourier-transform-and-laplace-transform
<p>I have read a few links about the difference between the Fourier transform and the Laplace transform but am still not satisfied.</p> <p>Please correct me if I am wrong: simply put, is the main difference between the Fourier transform and the Laplace transform that the real part is set to zero in the Fourier transform, while the real part is nonzero in the Laplace transform?</p>
<p>The Fourier transform is an <strong>intuitive</strong> tool that's a bridge between the domains of physics and mathematics, as it quantitatively describes the periodic content of signals and also the frequency response characterisation of systems that occur in physical (and engineering) applications. The use of frequencies is quite intuitive and consistent at least for <strong>stable</strong> systems...</p> <p>However for <strong>unstable</strong> systems (and signals) Fourier transforms become mostly awkward (if not useless) to deal with. However control engineers unavoidably must make frequent use of unstable systems in their work. For this purpose, the Fourier transform is either insufficient or awkward, hence a generalisation of the existing Fourier transform is made into the Laplace transform, which conveniently yields mathematical (complex algebraic) descriptions of stable as well as unstable systems, which was not possible with the Fourier transform. The Laplace transform, therefore, includes a <em>region of convergence</em> parameter.</p> <p>Another difference between the two transforms is in the time-domain <strong>transient</strong> analysis of the <strong>output</strong> of LTI systems driven under nonzero initial conditions, which is successfully captured by the Laplace transform only. In the sense that LCCDEs with initial conditions are straightforwardly solvable by (unilateral) Laplace transforms, whereas the standard FT can only solve LCCDEs with zero initial conditions (initial rest)...</p> <p>For one <strong>sided and two sided</strong> differences, I think Stanley has things to say...</p>
110
Laplace transform
Questions related to Laplace Transform
https://dsp.stackexchange.com/questions/27189/questions-related-to-laplace-transform
<p>While studying the Laplace transform, I came across some questions that I want to understand:</p> <p>a) We used to say that the Laplace transform includes both a real and an imaginary part, whereas in the Fourier transform we only have the imaginary part. But when we talk about convergence, we also choose the real part to be either &gt;0 or &lt;0. I want to know why we ignore the imaginary part?</p> <p>b) If we have any function x(t), how do we determine whether to take the bilateral integral or the unilateral integral? In the above case we have u(t) with the function, so our limits are changed. But if we don't have a $u(t)$ function with the exponential, like $$ x(t)=e^{at}$$ then how can we select bilateral or unilateral?</p> <p>Q2: what will be the Laplace Transform of $$f(t) = e^{at}$$ </p>
<p>The Fourier transform is the Laplace transform evaluated along the imaginary axis in the complex plane.</p> <p>The convergence of the Laplace transform depends only on the real part of $s$: the imaginary part merely decomposes the signal into sinusoids, which are bounded and so have no effect on the convergence.</p>
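A quick numeric check of the first statement (my own sketch; the signal x(t) = e^{-2t}u(t) and the test frequency w = 3 are arbitrary choices): the Laplace transform is X(s) = 1/(s+2), and evaluating it at s = j*w should match the Fourier integral of x(t).

```python
import cmath

w = 3.0      # test frequency (rad/s)
dt = 1e-4
# Riemann sum of the Fourier integral: e^{-2t} e^{-jwt} = e^{-(2+jw)t}
X_fourier = sum(cmath.exp(-(2 + 1j*w)*k*dt) for k in range(int(50/dt))) * dt
X_laplace = 1/(2 + 1j*w)   # Laplace result, evaluated on the jw axis
print(abs(X_fourier - X_laplace))   # ~0
```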
111
Laplace transform
Confusions regarding differences between Fourier transform &amp; Laplace transform?
https://dsp.stackexchange.com/questions/79569/confusions-regarding-differences-between-fourier-transform-laplace-transform
<p>Although this topic has already been addressed in multiple popular questions on SE, I still have a few confusions in this regard.</p> <p>Number 1)</p> <p>Link of question <a href="https://electronics.stackexchange.com/questions/86489/relation-and-difference-between-fourier-laplace-and-z-transforms">https://electronics.stackexchange.com/questions/86489/relation-and-difference-between-fourier-laplace-and-z-transforms</a></p> <p>In the top voted answer there was a sentence that I highlighted:</p> <blockquote> <p>If we set the real part of the complex variable s to zero, <span class="math-container">$\sigma=0$</span>, the result is the Fourier transform <span class="math-container">$F(j\omega)$</span> <strong>which is essentially the frequency domain representation of <span class="math-container">$f(t)$</span></strong></p> </blockquote> <p>So what does that imply? If the Fourier transform is essentially the frequency domain representation of <span class="math-container">$f(t)$</span>, then does that mean the Laplace transform is not a frequency domain representation?</p> <p>Number 2)</p> <p>Link of question</p> <p><a href="https://dsp.stackexchange.com/questions/45910/what-are-the-advantages-of-laplace-transform-vs-fourier-transform-in-signal-theo">What are the advantages of Laplace Transform vs Fourier Transform in signal theory?</a></p> <p>In the top voted answer, there was a sentence:</p> <blockquote> <p>Laplace transforms can capture the transient behaviors of systems. <strong>Fourier transforms only capture the steady state behavior.</strong></p> </blockquote> <p>What does that imply? The Fourier transform cannot be used for studying transient behavior, and the Laplace transform cannot be used for studying steady state behaviour?</p> <p><strong>One last confusion: which transform is more commonly used in practical applications, the Laplace transform or the Fourier transform?</strong></p>
<p>Concerning your first question, both, the Laplace and the Fourier transform, are frequency domain representations of a function or signal. In the Fourier transform we deal with a real-valued frequency variable <span class="math-container">$\omega$</span>, whereas in the Laplace transform we have a generally complex-valued independent variable (usually <span class="math-container">$s$</span>), the imaginary part of which equals frequency: <span class="math-container">$s=\sigma+j\omega$</span>.</p> <p>Your second question can be answered in a very simple way: the quoted sentence is wrong (which I also mentioned in a comment to that answer).</p> <p>As for &quot;practical application&quot;, I would say that when you talk about causal systems implemented with lumped elements, then the (unilateral) Laplace transform is probably used more often. It is also more straightforward to take initial conditions into account when using the unilateral Laplace transform.</p> <p>The Fourier transform is more suited to idealized systems such as ideal frequency-selective filters (lowpass, highpass, etc.). Note that the latter cannot be treated by the Laplace transform. I point this out because another common misconception is that the Laplace transform is more general than the Fourier transform. It is not, both transforms have their merits for solving certain problems.</p> <p>You should search for more questions and answers on Fourier and Laplace transforms on <em>this</em> site, many things have been said already.</p>
112
Laplace transform
Laplace transform plot isn&#39;t right
https://dsp.stackexchange.com/questions/75543/laplace-transform-plot-isnt-right
<p>I'm trying to plot the Laplace transform of a function. Here's my MATLAB script</p> <pre><code>clear
clc

syms t
L = 100;
sigma=(-10:0.1:(10-0.1));
omega = (-L/2:L/2-1)*(2*pi*0.1);

x = sin(2 * pi * t);
X_symbolic = laplace(x);
X = matlabFunction(X_symbolic);

result = [];
for j=1:length(omega)
    resultCol = [];
    for k=1:length(sigma)
        sValue = sigma(k) + 1i*omega(j);
        resultCol = [resultCol abs(X(sValue))];
    end
    result = [result ; resultCol];
end

mesh(sigma, omega, result)
xlabel('Real Axis(\sigma)', 'fontsize', 13)
ylabel('Imaginary Axis(\omega)', 'fontsize', 13)
zlabel('Magnitude', 'fontsize', 13)
ylim([-30 30])
</code></pre> <p>Here's my output <a href="https://i.sstatic.net/ef9pP.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ef9pP.png" alt="enter image description here" /></a> And here's the desired output <a href="https://i.sstatic.net/iOJBl.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/iOJBl.png" alt="enter image description here" /></a></p> <p>As you can see the desired output plot includes the two poles (±6.28319i). Where did I go wrong?</p>
113
Laplace transform
Unilateral Laplace Transform&#39;s Differentiation Property
https://dsp.stackexchange.com/questions/74093/unilateral-laplace-transforms-differentiation-property
<p>I've read in numerous places that the unilateral Laplace transform is extremely useful in solving differential equations with initial conditions, based on the differentiation property of the unilateral transform:</p> <p><span class="math-container">$\mathscr{L}\{f′(t)\}=sF(s)−f(0_−)$</span></p> <p>What I don't understand is why this is possible with the Laplace transform but not with the Fourier transform. Is this related to the decaying exponential introduced by the Laplace transform?</p> <p>I would be grateful for an in-depth answer involving intuition.</p>
<p>You need to look at the derivation of that property. Integration by parts gives</p> <p><span class="math-container">$$\begin{align}\mathcal{L}\{f'(t)\}&amp;=\int_{0^-}^{\infty}f'(t)e^{-st}dt\\&amp;=f(t)e^{-st}\Big|_{0^-}^{\infty}+s\underbrace{\int_{0^-}^{\infty}f(t)e^{-st}dt}_{F(s)}\\&amp;=\lim_{t\to\infty}f(t)e^{-st}-f(0^-)+sF(s)\tag{1}\end{align}$$</span></p> <p>The first term in <span class="math-container">$(1)$</span> is only guaranteed to vanish for <span class="math-container">$t\to\infty$</span> if <span class="math-container">$\text{Re}\{s\}&gt;\alpha$</span> for some value <span class="math-container">$\alpha$</span>. So even if you define a unilateral Fourier transform, that term may not vanish in general.</p> <p>However, if <span class="math-container">$\lim_{t\to\infty}f(t)=0$</span> holds, we could use the same property with a unilateral Fourier transform. It's just much more common to use the well-established (unilateral) Laplace transform in cases where non-zero initial conditions need to be taken into account.</p>
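As a numeric spot-check of the property derived above, L{f'} = sF(s) - f(0^-) (my own sketch; the smooth function f(t) = cos(3t) and the test point s = 2 are arbitrary choices, and F(s) = s/(s^2+9) with f(0^-) = 1):

```python
import math

s = 2.0
dt = 1e-4
# left Riemann sum of the transform of f'(t) = -3 sin(3t)
L_fprime = sum(-3*math.sin(3*t)*math.exp(-s*t)
               for t in (k*dt for k in range(int(30/dt)))) * dt
F = s/(s**2 + 9)
print(L_fprime)     # ~ -9/13 = -0.6923
print(s*F - 1.0)    # -0.6923, i.e. s*F(s) - f(0)
```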
114
Laplace transform
Conversion from laplace transform to z-transform
https://dsp.stackexchange.com/questions/14483/conversion-from-laplace-transform-to-z-transform
<p>I would like to know if</p> <p>$$ \text {Z-Transform ( } G(s)H(s) \text{ )} = \text {Z-Transform (}G(s) \text{)} \text { Z-Transform (} H(s) \text{) } = G(z)H(z) $$</p> <p>where G(s), H(s) are the Laplace transform representations of g and h, and G(z) and H(z) are the Z-transform representation of g and h.</p> <p>Is this relation true?</p>
<p>Your question makes no sense. Z transform is performed on a discrete signal/series.</p> <p>Since $H(s)$ is a continuous function, you can't just calculate a Z-transform of $H(s)$ without first sampling it, to make it discrete. Also, it doesn't make much sense to do a time->spectrum transform (such as a Z-transform) on a spectral representation ($H(s)$)</p> <p>I'm assuming that when you write "$Z-Transform(H(s))$", what you really want to do is to convert $H(s)\to H(z)$, meaning to calculate the Z-transform of $h[nT]$, where $h[nT]$ is $h(t)$, sampled at intervals of $T$, and $h(t)$ is the inverse Laplace transform of $H(s)$.</p> <p>If I'm correct in my assumption, the transformation you are seeking is known as "<a href="http://en.wikipedia.org/wiki/Star_transform" rel="nofollow">star transform</a>", which would provide a transform function $H^{*}(s)$, in terms of $e^{sT}$, which may be easily converted to $H(z)$ by way of the substitution $z=e^{sT}$.</p> <p><strong>Edit</strong> - some elaboration on the conversion process: what you need to do is calculate $H^{*}(s)$ from $H(s)$ using one of the two relations described in "Relation to Laplace transform" in the <a href="http://en.wikipedia.org/wiki/Star_transform" rel="nofollow">Wikipedia article</a>, then do the substitution $z=e^{sT}$, to get $H(z)$.</p>
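To see numerically that the relation in the question does not hold, here is a small sketch (my own example, not from the answer above): take g(t) = h(t) = e^{-t}u(t), so G(s)H(s) = 1/(s+1)^2, whose inverse transform is t e^{-t} u(t). Sampling that product and taking its z-transform gives a very different result from multiplying the individual z-transforms of the sampled signals. The sampling period T and evaluation point z are arbitrary choices.

```python
import math

T = 0.1    # sampling period
z = 2.0    # evaluation point, inside the region of convergence
N = 500    # series truncation (terms decay geometrically)

Zg = sum(math.exp(-n*T) * z**-n for n in range(N))              # Z{g(nT)}
Z_product = sum(n*T*math.exp(-n*T) * z**-n for n in range(N))   # Z of sampled (g*h)(t)

print(Z_product)   # ~0.151
print(Zg * Zg)     # ~3.335  -> clearly not equal
```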
115
Laplace transform
confused about time shifting property of Laplace Transform
https://dsp.stackexchange.com/questions/54400/confused-about-time-shifting-property-of-laplace-transform
<p>In the book <em>Signals and Systems</em> (2nd edition), a question is given as follows:</p> <p><span class="math-container">$$ x(t)=e^{-3(t+1)}u(t+1) $$</span></p> <p>and we are asked to find the unilateral Laplace Transform of the signal. The method given in the solution manual is as follows:</p> <p>Using Table 9.2 and the time shifting property we get:</p> <p><span class="math-container">$$ X_2(s) = \frac{e^s}{s+3} $$</span></p> <p>Now I am given a question as follows:</p> <p><span class="math-container">$$ e^{-2t}u(t-1) $$</span> and asked to find the Laplace Transform. Can I apply the method used above for the unilateral Laplace Transform and get:</p> <p><span class="math-container">$$ \frac{e^{-s}}{s+2} \rightarrow A $$</span></p> <p>Or does that method only hold true for unilateral Laplace Transforms? Because the answer marked A is wrong when I use this method. Also, when can I apply the property?</p>
<p>If you have written the function correctly then its Laplace transform can be found very similarly to your first example:</p> <p>Given <span class="math-container">$$x(t) = e^{-2 t} u(t-1)$$</span> its Laplace transform can be found as follows. First denote the signal</p> <p><span class="math-container">$$x_0(t) = e^{-2} e^{-2t} u(t) $$</span> </p> <p>then it is obvious that <span class="math-container">$$x(t) =x_0(t-1) $$</span></p> <p>Using the tables and the time-shifting property, we conclude:</p> <p><span class="math-container">$$X(s) = e^{-s} X_0(s) $$</span> <span class="math-container">$$X(s) = e^{-s} \frac{ e^{-2} }{s + 2} = \frac{ e^{-(s+2)} }{s + 2} $$</span> </p>
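The result above can be sanity-checked numerically (my own sketch; the test point s = 1 is an arbitrary choice): the defining integral for x(t) = e^{-2t}u(t-1) runs from t = 1, where the step turns on.

```python
import math

s = 1.0
dt = 1e-4
# left Riemann sum of the integral of e^{-2t} e^{-st} from t = 1 to "infinity"
num = sum(math.exp(-2*t)*math.exp(-s*t)
          for t in (1 + k*dt for k in range(int(20/dt)))) * dt
exact = math.exp(-(s + 2))/(s + 2)   # e^{-(s+2)}/(s+2) from the answer
print(num, exact)   # both ~0.016596
```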
116
Laplace transform
Inverse Laplace transform Using Inversion Formula
https://dsp.stackexchange.com/questions/30701/inverse-laplace-transform-using-inversion-formula
<blockquote> <p>Use the complex inversion formula to calculate the inverse Laplace transform $f(t)$ of the following Laplace transform: $$F_L (s) = \frac{1}{(s+2)(s^2 +4)}.$$ When the region of convergence is: \begin{align}(1)&amp; \quad Re(s)&lt;-2;\\(2)&amp;\quad -2&lt;Re(s)&lt;0;\\(3)&amp;\quad Re(s)&gt;0.\end{align}</p> </blockquote> <p><strong>Attempt:</strong></p> <p><a href="https://i.sstatic.net/0oPji.jpg" rel="nofollow noreferrer">Here</a> is an explanation of the complex inversion formula. Plugging the function into the formula:</p> <p>$$f(t) = \frac{1}{j2\pi} \int^{\sigma + j \infty}_{\sigma - j \infty} \frac{e^{st}}{(s+2)(s^2+4)}\ ds \tag{1}$$</p> <ul> <li>So, how do I need to choose $\sigma$? </li> <li>And how do I evaluate this for each of the three regions?</li> </ul> <p><strong>P. S.</strong> I tried to solve this without the complex inversion formula, just to see what the answer should look like. I started out by expanding using partial fractions as: \begin{align} \frac{1}{(s+2)(s^2 +4)}&amp;= \frac{1}{8} \left( \frac{1}{s+2} + \frac{1}{s^2+4} \right)\\ &amp;=\frac{1}{8} \left( \frac{1}{s+2} + \frac{1}{(s+2j)(s-2j)} \right)\\ &amp;=\frac{1}{8} \left( \frac{1}{s+2} + \frac{j}{4(s+2j)} - \frac{j}{4(s-2j)} \right). \end{align}</p> <p>Looking at a Laplace transform table, $\frac{1}{s-a} \leftrightarrow e^{at},$ so</p> <p>$$f(t) = \frac{1}{8} \left( e^{-2t} + \frac{j}{4} \left(e^{-2jt} + e^{2jt}\right) \right).$$</p> <ul> <li>Is this correct? </li> <li>If so, how can I get to this using the complex inversion formula?</li> </ul>
<p>In engineering practice, the complex inversion integral is hardly ever used. As an engineer, you will almost exclusively need to invert rational functions, and this can be done by partial fraction expansion and elementary inversions. So first I'll show you how to obtain the inverse Laplace transform by partial fraction expansion, then I'll explain the evaluation of the inversion integral using Cauchy's residue theorem.</p> <p>You have an error in the partial fraction expansion. Furthermore, you don't need to split up the complex pole pair. I would rewrite the Laplace transform like this:</p> <p>$$F(s)=\frac{1}{(s+2)(s^2+4)}=\frac{A}{s+2}+\frac{Bs+C}{s^2+4}\tag{1}$$</p> <p>with $A=\frac18$, $B=-\frac18$, and $C=\frac14$. The terms on the right-hand side of $(1)$ are elementary Laplace transforms. Now you just have to consider the different regions of convergence (ROC):</p> <p>$$\begin{align}\frac{1}{s+2}&amp;\Longleftrightarrow e^{-2t}u(t),&amp;\quad \text{Re}\{s\}&gt;-2\\\frac{1}{s+2}&amp;\Longleftrightarrow -e^{-2t}u(-t),&amp;\quad \text{Re}\{s\}&lt;-2\\ \frac{s}{s^2+4}&amp;\Longleftrightarrow \cos(2t)u(t),&amp;\quad\text{Re}\{s\}&gt;0\\ \frac{s}{s^2+4}&amp;\Longleftrightarrow -\cos(2t)u(-t),&amp;\quad\text{Re}\{s\}&lt;0\\ \frac{1}{s^2+4}&amp;\Longleftrightarrow \frac12\sin(2t)u(t),&amp;\quad\text{Re}\{s\}&gt;0\\ \frac{1}{s^2+4}&amp;\Longleftrightarrow -\frac12\sin(2t)u(-t),&amp;\quad\text{Re}\{s\}&lt;0 \end{align}$$</p> <p>So for the ROC $\text{Re}\{s\}&lt;-2$ you get the anti-causal signal $$f(t)=\frac18\left[-e^{-2t}+\cos(2t)-\sin(2t)\right]u(-t)\tag{2}$$</p> <p>For the ROC $-2&lt;\text{Re}\{s\}&lt;0$ you get the two-sided signal</p> <p>$$f(t)=\frac18\left[e^{-2t}u(t)+(\cos(2t)-\sin(2t))u(-t)\right]\tag{3}$$</p> <p>And, finally, for the ROC $\text{Re}\{s\}&gt;0$ you get the causal signal $$f(t)=\frac18\left[e^{-2t}-\cos(2t)+\sin(2t)\right]u(t)\tag{4}$$</p> <p><hr> If you need to use the inversion formula then it is very helpful to know <a 
href="https://en.wikipedia.org/wiki/Residue_theorem" rel="nofollow">Cauchy's residue theorem</a>, which says that</p> <p>$$\frac{1}{2\pi j}\oint_Cf(s)ds=\sum_kR_k\tag{5}$$</p> <p>where $f(s)$ is analytic with finitely many poles, $C$ is a positively oriented closed curve, and $R_k$ are the residues of the poles inside $C$. It can be shown that the inversion integral equals a contour integral if the curve $C$ is chosen appropriately:</p> <p>$$\frac{1}{2\pi j}\int_{\sigma-j\infty}^{\sigma+j\infty}F(s)e^{st}ds= \frac{1}{2\pi j}\oint_CF(s)e^{st}ds\tag{6}$$</p> <p>In the case of a rational function $F(s)$ the curve $C$ is chosen as a Bromwich contour, as shown <a href="http://www.solitaryroad.com/c916.html" rel="nofollow">here</a> in Fig.2. The straight line part of the curve is the actual integration path we're interested in. The contribution from the circular part of $C$ approaches zero. Depending on the chosen ROC, we have to choose the position of the straight line (i.e., the value of $\sigma)$ differently. For ROC $\text{Re}\{s\}&lt;-2$ (i.e., the anti-causal solution), the straight line is anywhere to the left of the left-most pole, and so the Bromwich contour enclosing all poles is negatively oriented, which results in a sign change:</p> <p>$$\frac{1}{2\pi j}\int_{\sigma-j\infty}^{\sigma+j\infty}F(s)e^{st}ds=-\sum_kR_k,\quad\sigma&lt;-2,\quad t&lt;0\tag{7}$$</p> <p>where $R_k$ are the residues corresponding to the poles of the function $f(s)=F(s)e^{st}$. For ROC $\text{Re}\{s\}&gt;0$ (i.e., the causal solution), the straight line is anywhere to the right of the right-most pole, and the Bromwich contour enclosing all poles is positively oriented. Consequently, we have</p> <p>$$\frac{1}{2\pi j}\int_{\sigma-j\infty}^{\sigma+j\infty}F(s)e^{st}ds=\sum_kR_k,\quad\sigma&gt;0,\quad t&gt;0\tag{8}$$</p> <p>For the two-side solution with ROC $-2&lt;\text{Re}\{s\}&lt;0$ we need to choose two curves with the straight line inside the ROC. 
One encloses the pole to its left at $s=-2$, so the curve is positively oriented, and the other one encloses the two poles at $s=2j$ and $s=-2j$ to the right of the straight line, so it is negatively oriented (which adds a negative sign to the corresponding residues). Let $R_1$ be the residue corresponding to the pole at $s=-2$, and let $R_2$ and $R_3$ be the residues of the two poles at $\pm 2j$, respectively. The inversion integral is then given by:</p> <p>$$\frac{1}{2\pi j}\int_{\sigma-j\infty}^{\sigma+j\infty}F(s)e^{st}ds=\begin{cases}R_1,&amp;t&gt;0\\-R_2-R_3,&amp;t&lt;0\end{cases},\quad-2&lt;\sigma&lt;0\tag{9}$$</p> <p>What remains is the computation of the residues. The residue at pole $p_k$ is given by $$R_k=\lim_{s\rightarrow p_k}(s-p_k)f(s)\tag{10}$$</p> <p>With $f(s)=F(s)e^{st}$ we get for $p_1=-2$</p> <p>$$R_1=\lim_{s\rightarrow -2}(s+2)F(s)e^{st}=\lim_{s\rightarrow -2}\frac{e^{st}}{s^2+4}=\frac{e^{-2t}}{8}\tag{11}$$</p> <p>In a similar manner the other residues are obtained:</p> <p>$$\begin{align}R_2&amp;=-\frac{e^{2jt}}{8}\frac{1+j}{2}\\ R_3&amp;=-\frac{e^{-2jt}}{8}\frac{1-j}{2}\end{align}\tag{12}$$</p> <p>Now $(7)-(9)$ can be evaluated, and the results are of course the same as $(2)-(4)$.</p>
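As a numeric cross-check of the causal result (4) (my own sketch, with the arbitrary test point s = 1): the unilateral Laplace transform of that f(t) should reproduce F(s) = 1/((s+2)(s^2+4)).

```python
import math

s = 1.0
dt = 1e-4
# the causal solution (4): f(t) = (1/8)[e^{-2t} - cos(2t) + sin(2t)] u(t)
f = lambda t: (math.exp(-2*t) - math.cos(2*t) + math.sin(2*t))/8
# left Riemann sum of the transform integral
num = sum(f(t)*math.exp(-s*t) for t in (k*dt for k in range(int(40/dt)))) * dt
print(num)                      # ~ 1/15
print(1/((s + 2)*(s**2 + 4)))   # 0.0666...
```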
117
Laplace transform
Confusion regarding Laplace transform calculation in MATLAB
https://dsp.stackexchange.com/questions/83278/confusion-regarding-laplace-transform-calculation-in-matlab
<p>I am trying to learn about the Laplace transform, especially about the ROC, and I found <a href="http://jntuhsd.in/uploads/programmes/Module15_LT_13.01_.2017_.PDF" rel="nofollow noreferrer">this weblink.</a></p> <p>I have also attached a snapshot of this link and highlighted where it is said that although the signals differ, their Laplace Transforms are identical.</p> <p>My MATLAB code:</p> <pre><code>clc
clear
close all
syms s t a
x1=exp(-a*t)*heaviside(t)
x2=-exp(-a*t).*heaviside(-t)
X1=laplace(x1,s)
X2=laplace(x2,s)
</code></pre> <p>When I run the above script I get X2=0, but as per the above mentioned web link, I should have got <span class="math-container">$1/(s+a)$</span></p> <p>Why am I getting different values of X1 and X2?</p> <p><a href="https://i.sstatic.net/wXKvi.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/wXKvi.png" alt="enter image description here" /></a></p> <p>I have also attached a snapshot of the above weblink</p>
<p>The <a href="https://www.mathworks.com/help/symbolic/sym.laplace.html" rel="nofollow noreferrer">Matlab implementation</a> of the Laplace transform computes the <em>uni-lateral</em> (one-sided) Laplace transform:</p> <p><a href="https://i.sstatic.net/F19tu.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/F19tu.png" alt="Screenshot of Laplace page on Mathworks website" /></a></p> <p>Since <span class="math-container">$x_2(t)$</span> is zero for <span class="math-container">$t\gt0$</span>, the result of the uni-lateral Laplace transform must be zero.</p>
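The point can also be checked by hand (a numeric sketch of my own, with the arbitrary choices a = 1 and s = -3): x2(t) = -e^{-at}u(-t) is zero for t &gt; 0, so its one-sided transform is 0, which is what MATLAB computes. The *bilateral* transform integrates over t &lt; 0 and gives 1/(s+a) in the ROC Re{s} &lt; -a.

```python
import math

a, s = 1.0, -3.0   # s = -3 lies inside the ROC Re{s} < -1
dt = 1e-4
# left-sided integral of x2(t) e^{-st} = -e^{-at} e^{-st} over t < 0
num = sum(-math.exp(-a*t)*math.exp(-s*t)
          for t in (-k*dt for k in range(1, int(20/dt)))) * dt
print(num)         # ~ -0.5
print(1/(s + a))   # -0.5, matching 1/(s+a)
```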
118
Laplace transform
Confusion in proof of Inverse Laplace Transform
https://dsp.stackexchange.com/questions/27288/confusion-in-proof-of-inverse-laplace-transform
<p>For the proof of inverse Laplace transform, we change the integral from $\omega$ to $s$. I want to know the reason why we need to change the integral? <a href="https://i.sstatic.net/qjRzE.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/qjRzE.png" alt="enter image description here"></a></p>
<p>To summarize the discussion:</p> <ul> <li><p>The usual substitution is $s = \sigma + j \omega$ where $\sigma$ is the real part of the $s$ variable and $\omega$ is the imaginary part.</p></li> <li><p>The equation in the image is for the <strong>Fourier</strong> transform, not the <strong>Laplace</strong> transform. The Fourier transform can be thought of as the Laplace transform evaluated on the imaginary axis ($\sigma = 0$).</p></li> <li><p>The differential $ds$, when looking at real and imaginary parts distinctly, becomes $d\sigma + j d\omega$.</p></li> <li><p>Any <strong>differential</strong> is an infinitesimal (very small) change in that variable. $dx$ is a small change in $x$.</p></li> </ul>
119
Laplace transform
Time Setting of $z$ and Laplace Transforms
https://dsp.stackexchange.com/questions/56821/time-setting-of-z-and-laplace-transforms
<p>I'm aware that the z-transform and the Laplace transform have an analogous relationship, but I want to be doubly sure: does the z-transform only apply in the discrete-time setting, and the Laplace transform only in the continuous-time setting?</p> <p>Thanks.</p>
<p>In typical (or all) applications, the Laplace Transform is used for continuous time systems and the z-Transform for discrete systems. However, the Laplace Transform for Discrete Time Systems certainly exists but would be more complicated than necessary to solve. The z transform exists as a mathematical simplification of the Laplace Transform that can be applied if a consistent sampling rate is used. This may be clearer with the formulas given below:</p> <p>The (one-sided, for causal systems) Laplace Transform is given as:</p> <p><span class="math-container">$$X(s) = \int_{t=0}^\infty x(t) e^{-st}dt$$</span> </p> <p>Thus <strong>the Laplace Transform for a discrete time system</strong> is given simply by setting t= nT for integer n (where the integral then becomes a summation):</p> <p><span class="math-container">$$X(s) = \sum_{n=0}^\infty x(nT)e^{-snT}$$</span></p> <p>At this point we can make a substitution to get rid of the more complicated exponential (which is one possible mapping from s to z):</p> <p><span class="math-container">$$z = e^{sT}$$</span></p> <p>resulting in the z-Transform for a discrete-<strong>time</strong> sequence:</p> <p><span class="math-container">$$X(z) = \sum_{n=0}^\infty x(nT)z^{-n}$$</span></p> <p>Equivalently the z-Transform for a discrete sequence (no units of time, simply sample number) is:</p> <p><span class="math-container">$$X(z) = \sum_{n=0}^\infty x(n)z^{-n}$$</span></p> <p>Thus we see that the Laplace Transform of a discrete time system exists (but we have no need to solve that typically as working with the z transform once discrete is much simpler). I am not aware of a reverse case although we may be able to create one by taking the limit <span class="math-container">$T\rightarrow 0$</span> of the discrete time z-Transform. 
I haven't worked through this but we may find the same situation (we can force it to be mathematically equivalent but the math involved would be more complicated than necessary vs taking the Laplace Transform directly). </p>
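The substitution z = e^{sT} above can be illustrated directly (my own sketch; T = 0.1 s and the pole locations are arbitrary choices): the j-omega axis (Re{s} = 0) lands on the unit circle, left-half-plane poles land inside it, and right-half-plane poles land outside.

```python
import cmath

T = 0.1
# magnitude of z = e^{sT} for a stable pole, an axis pole, and an unstable pole
mags = {s: abs(cmath.exp(s*T)) for s in (-2 + 5j, 0 + 5j, 1 + 5j)}
for s, m in mags.items():
    print(s, '-> |z| =', m)
# stable pole -> |z| < 1, axis -> |z| = 1, unstable pole -> |z| > 1
```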
120
Laplace transform
How to compute Laplace Transform in Python?
https://dsp.stackexchange.com/questions/66428/how-to-compute-laplace-transform-in-python
<p>I am trying to do practicals for signal processing where I need to Laplace transform a function. I have used numpy's 'fft' before, but nothing on the Laplace transform is found in the documentation. Do we have any other alternative?</p> <p>Please go through the notebook to understand the problem (would love to get suggestions/contributions)</p> <p><a href="https://github.com/sachinmotwani20/Raw-Signal-Processing-Python/blob/master/LaplaceToFourier.ipynb" rel="noreferrer">https://github.com/sachinmotwani20/Raw-Signal-Processing-Python/blob/master/LaplaceToFourier.ipynb</a></p>
<p>Given the approach started in the OP's Github code I have this suggestion:</p> <p>Observe that the unilateral Laplace Transform given as:</p> <p><span class="math-container">$$X(s) = \int_0^\infty x(t)e^{-st}dt$$</span></p> <p>Is just the Fourier Transform of a causal function with a weighting exponential:</p> <p><span class="math-container">$$X(s) = \int_0^\infty x(t)e^{-(\sigma+j\omega)t}dt$$</span></p> <p><span class="math-container">$$X(s) = \int_0^\infty [e^{-\sigma t}x(t)]e^{-j\omega t}dt$$</span></p> <p>Which is the Fourier Transform of <span class="math-container">$e^{-\sigma t}x(t)$</span>.</p> <p>So to proceed with a graphical solution, the first step is to learn how to <a href="https://matplotlib.org/examples/mplot3d/surface3d_demo.html" rel="noreferrer">produce surface plots in python</a>, and then index through <span class="math-container">$\sigma$</span> within the Region of Convergence (see below) and compute the FFT of <span class="math-container">$e^{-\sigma t}x(t)$</span> to create the complex surface values given <span class="math-container">$\sigma$</span> and <span class="math-container">$\omega$</span> as the magnitude of the complex result.</p> <p>More generally when the goal is to simply compute the Laplace (and inverse Laplace) transform directly in Python, I recommend using the SymPy library for symbolic mathematics. 
For example below I show an example in python to compute the impulse response of the continuous time domain filter further detailed in this post by using SymPy to compute the inverse Laplace transform:</p> <pre><code>import sympy as sp

s, t = sp.symbols('s t')
trans_func = 1/((s+0.2+0.5j)*(s+0.2-0.5j))
result = sp.inverse_laplace_transform(trans_func, s, t)
</code></pre> <p>Which will return as result the following:</p> <p><span class="math-container">$$2.0e^{-0.2t}\sin(0.5t)\theta(t)$$</span></p> <p>Where <span class="math-container">$\theta(t)$</span> is the unit step function, which is shorthand for saying the result applies for <span class="math-container">$t\ge0$</span> and is zero elsewhere.</p> <hr /> <p><strong>Further Details On a Graphical Solution</strong></p> <p>The challenge with what the OP is trying to do is that the Laplace Transform is a function of the complex variable &quot;s&quot;, so for each possible value of &quot;s&quot; (which is simply the set of all complex numbers) the Laplace Transform would have a complex result with a magnitude and phase.</p> <p>So when one <strong>tries</strong> to plot this, the magnitude and phase could be plotted separately but these would be 3D surface plots showing either the result of the magnitude or the phase over a 2D surface, similar otherwise to how we plot the magnitude and phase of the Fourier Transform as a 2D function over the line representing all frequencies.</p> <p>For the graphical representation of the Laplace Transform, we typically just show the locations where that function goes to infinity (poles) or is zero (zeros). In fact every other location on the surface is uniquely defined by the pole and zero locations alone, so that is all we need to show to define it.</p> <p>This is best shown with an example.
Since I already have the graphic for this particular case, consider the time domain function of a decaying sinusoid given by the formula below and the plot below that where we see in the dashed red line the envelope for the decaying function <span class="math-container">$2e^{-0.2t}$</span>. <span class="math-container">$u(t)$</span> is the step function which is <span class="math-container">$0$</span> for time <span class="math-container">$t&lt;0$</span> and <span class="math-container">$1$</span> for time <span class="math-container">$t \ge 0$</span>.</p> <p><span class="math-container">$$ x(t) = u(t)2e^{-0.2t}sin(0.5t)$$</span></p> <p><a href="https://i.sstatic.net/wTeIA.png" rel="noreferrer"><img src="https://i.sstatic.net/wTeIA.png" alt="decaying sinusoid" /></a></p> <p>To get the Laplace Transform (easily), we decompose the function above into exponential form and then use the fundamental transform for an exponential given as :</p> <p><span class="math-container">$$\mathscr{L}\{u(t) e^{-\alpha t}\} = \frac{1}{s+\alpha}$$</span></p> <p>This is the unilateral Laplace Transform (defined for <span class="math-container">$t = 0$</span> to <span class="math-container">$\infty$</span>), and this relationship goes a long way since we can describe the response of any causal linear system using such exponential forms.</p> <p>So the equation above, assuming <span class="math-container">$t&gt;0$</span>, and using Euler's identity becomes :</p> <p><span class="math-container">$$ x(t) = u(t)2e^{-0.2t}sin(0.5t)$$</span></p> <p><span class="math-container">$$ = 2e^{-0.2t}\frac{(e^{+j0.5t}-e^{-j0.5t})}{2j}$$</span></p> <p><span class="math-container">$$ = -je^{-0.2t}(e^{+j0.5t}-e^{-j0.5t})$$</span></p> <p><span class="math-container">$$ = -je^{-(0.2-j0.5)t} +je^{-(0.2+j0.5)t}$$</span></p> <p>Which we can then easily take the Laplace Transform to get a function of the complex variable <span class="math-container">$s$</span> :</p> <p><span class="math-container">$$ X(s) = 
\frac{j}{s+0.2+j0.5}-\frac{j}{s+0.2-j0.5} = \frac{1}{(s+0.2+j0.5)(s+0.2-j0.5)}$$</span></p> <p>The graph of the magnitude of this is shown below, which is a surface plot for all values of <span class="math-container">$s$</span>, where <span class="math-container">$s$</span> is a complex variable with real and imaginary components traditionally described as <span class="math-container">$s = \sigma +j\omega$</span>. So we simply choose a particular complex value <span class="math-container">$s$</span>, and plot the magnitude of the result of <span class="math-container">$X(s)$</span>.</p> <p><a href="https://i.sstatic.net/WtPJ8.png" rel="noreferrer"><img src="https://i.sstatic.net/WtPJ8.png" alt="Laplace Transform" /></a></p> <p>The Python code to generate a similar plot is as follows:</p> <pre><code>from mpl_toolkits.mplot3d import Axes3D
import matplotlib.cm as cm
import matplotlib.pyplot as plt
import numpy as np

def s_plane_plot(sfunc, limits = [3,3,10], nsamp = 500):
    fig = plt.figure()
    ax = fig.add_subplot(projection = '3d')
    sigma = np.linspace(-limits[0], limits[0], nsamp)
    omega = sigma.copy()
    sigma, omega = np.meshgrid(sigma, omega)
    s = sigma + 1j*omega
    surf = ax.plot_surface(sigma, omega, np.abs(sfunc(s)), cmap = cm.flag)
    ax.set_zlim(0, limits[2])
    plt.xlabel('$\sigma$')
    plt.ylabel('$j\omega$')
    fig.tight_layout()

def X(s):
    return 1/((s + .2+.5j)*(s + .2-.5j))

s_plane_plot(X, limits = [1,1,4], nsamp = 40)
</code></pre> <p><strong>Caveat:</strong> This is indeed the magnitude of the Laplace Transform <strong>if</strong> the Laplace Transform exists. For causal functions, the Laplace Transform does not exist for all values of s whose real part is to the left of the rightmost pole. Why? Because for any values of <span class="math-container">$s$</span> in that region (left of the rightmost pole for causal systems), the Laplace Transform (given by the integral) will not converge (grows to infinity).
While for all values of s to the right of the rightmost pole, the transform will converge to be the magnitude on the surface plotted above. So the proper graphic would only be valid to the right of the rightmost pole. This can be confusing as <span class="math-container">$X(s)$</span> certainly converges as was done to make this plot, but it is in the transformation itself that the result cannot be obtained.</p> <p>This is very clear if we consider the envelope in our example function <span class="math-container">$x(t)$</span> which was given by <span class="math-container">$2e^{-0.2t}$</span> for all <span class="math-container">$t&gt;0$</span>, and the Laplace Transform <span class="math-container">$X(s)$</span> for <span class="math-container">$s = -1$</span>:</p> <p><span class="math-container">$$X(s) = \int_0^\infty x(t)e^{-st}dt$$</span></p> <p><span class="math-container">$$X(s= -1) = \int_0^\infty 2e^{-0.2t}e^{t}dt = \int_0^\infty 2e^{0.8t}dt$$</span></p> <p>which will not converge since the function <span class="math-container">$e^{0.8t}$</span> continuously grows larger for larger <span class="math-container">$t$</span>.</p>
The one precaution is that the Fourier Transform is often given as a bilateral function (with <span class="math-container">$t$</span> extending from <span class="math-container">$-\infty$</span> to <span class="math-container">$\infty$</span>), so unless the function is declared to be causal, we must use the bilateral Laplace Transform for the two to be exactly identical (a form that is also seldom used). This explains why the Fourier Transform of a sine wave would appear as two impulses at <span class="math-container">$\pm \omega_c$</span>, while, as you might deduce from the plot above, the Laplace Transform of the sine wave would appear more as tent poles on the <span class="math-container">$j\omega$</span> axis. You would see this as well with the Fourier Transform if you did the transform of <span class="math-container">$u(t)\cos(\omega t)$</span> specifically.</p>
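A small numerical sketch of the caveat above (illustrative; it uses $x(t) = 2e^{-0.2t}\sin(0.5t)\,u(t)$, the time function corresponding to the transform pair at the top of this answer): truncated versions of the Laplace integral settle to a fixed value for $s$ inside the region of convergence, but grow without bound for $s=-1$, which lies to the left of the poles at $\mathrm{Re}(s)=-0.2$.

```python
import numpy as np

# Truncated Laplace integral of the causal example signal
# x(t) = 2 exp(-0.2 t) sin(0.5 t), approximated by a Riemann sum up to t = T.
def laplace_partial(s, T, dt=1e-3):
    t = np.arange(0.0, T, dt)
    x = 2.0 * np.exp(-0.2 * t) * np.sin(0.5 * t)
    return np.sum(x * np.exp(-s * t)) * dt

# Inside the ROC (Re(s) = 0 > -0.2): partial integrals converge to
# X(0) = 1/0.29, the value of the surface plot at the origin.
inside = [laplace_partial(0.0 + 0.0j, T) for T in (50, 100, 200)]

# Left of the poles (s = -1): the integrand grows like exp(0.8 t),
# so the partial integrals blow up instead of converging.
outside = [laplace_partial(-1.0 + 0.0j, T) for T in (50, 100, 200)]
```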
121
Laplace transform
$y(0)$ terms in the Laplace transform
https://dsp.stackexchange.com/questions/94361/y0-terms-in-the-laplace-transform
<p>When taking the Laplace transform (in my case, for building a transfer function) of a signal <span class="math-container">$y(t)$</span>, the substitution below is often made directly:</p> <p><span class="math-container">$$\mathscr{L} \big\{ y^{(n)}(t) \big\} = s^n \mathscr{L} \big\{ y(t) \big\}$$</span></p> <p>But this ignores some <span class="math-container">$y(0)$</span> terms. For example, if we take the Laplace transform <span class="math-container">$$\mathscr{L} \big\{ x(t) \big\} = \int_0^\infty x(t) e^{-st} \ dt$$</span></p> <p>of <span class="math-container">$y'(t)$</span> then we get:</p> <p><span class="math-container">$$-y(0) + s\mathscr{L} \big\{ y(t) \big\}$$</span></p> <p>and with higher derivatives, these terms get even more complex and start including <span class="math-container">$s$</span>. So I guess we are making some assumptions about <span class="math-container">$y(t)$</span> so that <span class="math-container">$y(0)=0$</span>? If so, what are these and why are they valid? (i.e., often we are filtering signals that do not start at <span class="math-container">$0$</span>.)</p>
<p>Linear systems, by definition, obey the superposition principle. As such, depending on what you're doing, the initial conditions can be irrelevant.</p> <p>If you want to know what the overall response is, then including the initial conditions is required because they make up part of the overall response. For example, currently I'm solving the diffusion equation and include initial conditions when using the Laplace transform to solve the differential equations so that I can see how things evolve. As you say, the initial conditions will contribute a component to the transient response, and possibly even the steady state response.</p> <p>In contrast, if you want to know how the system will respond to a specific input, the initial conditions are irrelevant because you can just consider each input independently thanks to superposition, so for simplicity set the initial conditions to zero. For example, whenever I am designing filters or controllers, I ignore all initial conditions and just consider different types of inputs/disturbances independently (stability is not affected by initial conditions).</p> <p>You can initialise digital filters to minimise the transient response due to the initial conditions, which relates to the earlier paragraph about the overall response. And you're right: if you don't initialise the filter with some other state, the assumption is that the input into the filter has been zero for all time prior to you beginning to feed in your signal (this is certainly the behaviour of Matlab's filter function).</p>
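As a small illustration of that last paragraph, here is a sketch using SciPy's equivalents of Matlab's filter initialisation (the 2nd-order Butterworth low-pass is an arbitrary example, not anything from the question):

```python
import numpy as np
from scipy.signal import butter, lfilter, lfilter_zi

# A unit-DC-gain low-pass filter fed a constant input of 1.0.
b, a = butter(2, 0.2)               # example 2nd-order low-pass
x = np.ones(100)                    # input that was "already on" before t = 0

# Default: zero initial state, so the output ramps up; the startup
# transient is caused purely by the wrong assumed history.
y_zero = lfilter(b, a, x)

# Initialised with lfilter_zi's steady state (scaled by the first input
# sample), the transient disappears: the output is 1.0 from sample 0.
zi = lfilter_zi(b, a)
y_init, _ = lfilter(b, a, x, zi=zi * x[0])
```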
122
Laplace transform
Why we take Laplace Transform of functions which converged using Fourier Transform
https://dsp.stackexchange.com/questions/27230/why-we-take-laplace-transform-of-functions-which-converged-using-fourier-transfo
<p>There are several functions for which we know that Fourier Transform will exist but still we calculate its Laplace Transform. Can I know the reason why we need to take Laplace transform for which we know its convergence?</p> <p>Thanks</p>
<p>There is a large class of functions for which both the Fourier transform and the Laplace transform exist, and for which one can be obtained from the other by setting $s=j\omega$. (Note that even when both exist, the latter need not be the case). So for this class of functions, obtaining the Laplace transform from the Fourier transform (or vice versa) does not require any additional work.</p> <p>Example: $$\begin{align}x(t)&amp;=e^{-at}u(t),\quad a&gt;0\\ \text{Fourier transform:}\quad X(j\omega)&amp;=\frac{1}{j\omega +a}\\ \text{Laplace transform:}\quad X(s)&amp;=\frac{1}{s +a},\quad \text{Re}(s)&gt; -a\end{align}$$</p> <p>What the Laplace transform offers is a description in the complex $s$-plane, such as poles and zeros of transfer functions, from which many system properties can be easily deduced. The Fourier transform only shows the behavior on the frequency axis (i.e., the spectrum).</p>
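A quick numerical check of this example (with an illustrative value $a=2$): evaluating the Fourier integral of $e^{-at}u(t)$ at a few frequencies reproduces $1/(j\omega+a)$, i.e., the Laplace transform with $s=j\omega$ substituted.

```python
import numpy as np
from scipy.integrate import quad

a = 2.0

def fourier_at(w):
    # Fourier integral of the causal signal, split into real/imaginary parts
    re, _ = quad(lambda t: np.exp(-a * t) * np.cos(w * t), 0, np.inf)
    im, _ = quad(lambda t: -np.exp(-a * t) * np.sin(w * t), 0, np.inf)
    return re + 1j * im

# Pairs of (numerical Fourier integral, Laplace transform at s = j*w)
checks = {w: (fourier_at(w), 1 / (1j * w + a)) for w in (0.0, 1.0, 3.0)}
```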
123
Laplace transform
How is causality in Laplace transform related to Fourier transform?
https://dsp.stackexchange.com/questions/95015/how-is-causality-in-laplace-transform-related-to-fourier-transform
<ol> <li><p>Taking the Laplace transform of a system given by a differential equation yields its transfer function <span class="math-container">$H(s)$</span>. The region of convergence of the causal impulse response of the system lies to the right of the rightmost pole in the complex plane. Suppose the system is stable. Then the region of convergence will include the imaginary axis. We know that the imaginary axis corresponds to the Fourier transform of the impulse response <span class="math-container">$h(t)$</span> of <span class="math-container">$H(s)$</span>. Does this imply that the Fourier transform will also yield a causal <span class="math-container">$h(t)$</span>?</p> </li> <li><p>In my book it says we can continue the Laplace transform analytically in the complex plane such that the whole Laplace plane is included in the region of convergence except for the poles. This would imply that I can take the Fourier transform of unstable systems by substituting <span class="math-container">$s=j\omega$</span>. How does that even make sense? I'm so confused! And how is that related to the impulse response being causal then?</p> </li> </ol>
<p>If we are given a function <span class="math-container">$H(s)$</span> and we're told that it is a Laplace transform, then there are usually many possible corresponding time-domain functions. Let's assume that <span class="math-container">$H(s)$</span> is rational (so we can talk about poles), then the possible regions of convergence (ROCs) are limited by the real parts of the poles. If the ROC is to the right of the rightmost pole, the corresponding time-domain function is right-sided, if the ROC is to the left of the leftmost pole, the time-domain function is left-sided, and if the ROC is between two poles, we have a two-sided time-domain function.</p> <p>Let's now assume that there are no poles on the imaginary axis. If we set <span class="math-container">$s=j\omega$</span> we obtain a valid Fourier transform <span class="math-container">$H(j\omega)$</span>. Its inverse Fourier transform equals the time domain function corresponding to the (unique) ROC of <span class="math-container">$H(s)$</span> which includes the imaginary axis.</p> <p><strong>Example:</strong></p> <p><span class="math-container">$$H(s)=\frac{2a}{a^2-s^2}=\frac{1}{s+a}-\frac{1}{s-a},\qquad a&gt;0\tag{1}$$</span></p> <p>Depending on the ROC, <span class="math-container">$H(s)$</span> is the Laplace transform of the following three time-domain functions:</p> <p><span class="math-container">\begin{align*} h_1(t) &amp;= \left[e^{-at}-e^{at}\right]u(t),&amp; \textrm{Re}(s)&gt;a \\ h_2(t) &amp;= \left[e^{at}-e^{-at}\right]u(-t),&amp; \textrm{Re}(s)&lt;-a \\ h_3(t) &amp;= e^{-at}u(t)+e^{at}u(-t)=e^{-a|t|},&amp; -a&lt;\textrm{Re}(s)&lt;a \end{align*}</span></p> <p>The inverse Fourier transform of</p> <p><span class="math-container">$$H(j\omega)=\frac{2a}{a^2+\omega^2}\tag{2}$$</span></p> <p>is given by <span class="math-container">$h_3(t)$</span>, which corresponds to the only stable impulse response, i.e., to the ROC which includes the imaginary axis. 
The functions <span class="math-container">$h_1(t)$</span> and <span class="math-container">$h_2(t)$</span> don't have a Fourier transform.</p> <p>Analytic continuation of <span class="math-container">$H(s)$</span> to all points of the complex plane except for the poles does not affect any of the above, and it is not related to the Fourier transform.</p>
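A quick numerical check of equation (2), with an illustrative value $a=1.5$: the Fourier integral of the two-sided impulse response $h_3(t)=e^{-a|t|}$ indeed matches $2a/(a^2+\omega^2)$.

```python
import numpy as np
from scipy.integrate import quad

a = 1.5

def H(w):
    # h3 is even, so its Fourier transform is real and the cosine
    # part of the integral suffices.
    val, _ = quad(lambda t: np.exp(-a * abs(t)) * np.cos(w * t),
                  -np.inf, np.inf)
    return val

# (numerical transform, closed form 2a/(a^2 + w^2)) at a few frequencies
pairs = [(H(w), 2 * a / (a**2 + w**2)) for w in (0.0, 0.7, 2.0)]
```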
124
Laplace transform
Confusion in initial condition of differential equation using Laplace transform transform
https://dsp.stackexchange.com/questions/69667/confusion-in-initial-condition-of-differential-equation-using-laplace-transform
<p>I'm confused about solving linear constant-coefficient differential equations (LCCDEs) by Laplace transform if initial conditions are given at time</p> <ol> <li>just before <span class="math-container">$t=0$</span></li> <li>just after <span class="math-container">$t=0$</span></li> <li>exactly at <span class="math-container">$t=0$</span></li> </ol> <p>Is the method of solving LCCDEs by Laplace transform the same in all three cases, or is it different?</p> <p>I know it's more of a mathematical question and <a href="https://math.stackexchange.com/q/3784137/70664">I asked this question in mathematics stack exchange</a> but I didn't get any answers, so I ask it here, as it is equally a problem of system differential equations.<img src="https://i.sstatic.net/guba1.jpg" alt="enter image description here" /></p>
<p>Initial conditions are always given at <span class="math-container">$t=0^-$</span>, because they define the state of the system <em>before</em> any input is applied, and - by definition - the input is applied at <span class="math-container">$t=0$</span>. The state at <span class="math-container">$t=0^+$</span> is determined by the initial conditions as well as by the input signal.</p> <p>The unilateral Laplace transform can be used to solve LCCDEs with initial conditions <span class="math-container">$y(0^-), y'(0^-),\ldots$</span> because of the definition</p> <p><span class="math-container">$$\mathcal{L}\{f(t)\}=F(s)=\int_{0^{\color{red} -}}^{\infty}f(t)e^{-st}dt\tag{1}$$</span></p> <p>from which it follows that</p> <p><span class="math-container">$$\mathcal{L}\{f'(t)\}=sF(s)-f(0^-)\tag{2}$$</span></p> <p>Note that it's common to write initial conditions as <span class="math-container">$y(0),y'(0),\ldots$</span>, when actually <span class="math-container">$t=0^{-}$</span> is meant.</p> <p>EDIT: Concerning the example in the book: if there is no Dirac delta impulse in the current <span class="math-container">$i(t)$</span> at <span class="math-container">$t=0$</span>, the capacitor voltage <span class="math-container">$v_c(t)$</span> cannot jump at <span class="math-container">$t=0$</span>. Consequently, <span class="math-container">$v_c(0^-)=v_c(0)=v_c(0^+)$</span> must hold.</p> <p>In general, the initial conditions define values of the output signal and its derivatives right before the source signal is switched on. It can be the case that the limits of the output and its derivatives exist at <span class="math-container">$t=0$</span>. If that is the case, it doesn't make a difference if we use <span class="math-container">$t=0^{-}$</span> or <span class="math-container">$t=0$</span> or <span class="math-container">$t=0^{+}$</span> because the corresponding function values are all the same. 
If the output signal or its derivatives are discontinuous at <span class="math-container">$t=0$</span>, then the distinction becomes important, and the initial conditions define the values just before the discontinuity, i.e., at <span class="math-container">$t=0^{-}$</span>.</p>
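The derivative property in equation (2) can be sanity-checked symbolically on a concrete signal. This is just a sketch using SymPy with $f(t)=e^{-t}$, which is continuous at $t=0$, so $f(0^-)=f(0)=1$:

```python
import sympy as sp

# Check L{f'(t)} == s*F(s) - f(0) for the concrete signal f(t) = exp(-t)
t, s = sp.symbols('t s', positive=True)
f = sp.exp(-t)
F = sp.laplace_transform(f, t, s, noconds=True)               # 1/(s + 1)
Fd = sp.laplace_transform(sp.diff(f, t), t, s, noconds=True)  # L{f'} = -1/(s + 1)
lhs = sp.simplify(Fd)
rhs = sp.simplify(s * F - f.subs(t, 0))                       # s/(s+1) - 1
```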
125
Laplace transform
Bilateral Laplace transform and existence of Fourier transform
https://dsp.stackexchange.com/questions/50462/bilateral-laplace-transform-and-existence-of-fourier-transform
<p>I was reading from Athanasios Papoulis' "The Fourier Integral and Its Applications", and they referenced the bilateral Laplace transform and Fourier Transform as:</p> <p>$$F(p)=\int_{-\infty}^{\infty}e^{-pt}f(t)dt$$ $$F(\omega)=\int_{-\infty}^{\infty}e^{-j\omega t}f(t)dt$$</p> <p>and stability indicates that the real part of $p$ must lie between $a$ and $b$. (For stability, I'm guessing?)</p> <p>As per the textbook, if we take the values of $p$ to be purely real and ignore the imaginary axis, then $F(\omega)$ doesn't even exist.</p> <p>Logically, I can't see how that could happen. I was wondering if anyone could explain how that is so, as well as why we limit the real part of $p$. (I'm guessing it's similar to the unit circle in the Z transform.)</p>
<p>The bilateral Laplace transform converges in a vertical strip $a&lt;\text{Re}\{p\}&lt;b$, called the region of convergence (ROC). Compare this to the bilateral $\mathcal{Z}$-transform which converges in an annulus centered at the origin of the complex plane: $r_1&lt;|z|&lt;r_2$. For causal signals we have $b=\infty$ and $r_2=\infty$.</p> <p>If the vertical strip $a&lt;\text{Re}\{p\}&lt;b$ does not include the imaginary axis, i.e., if $0&lt;a&lt;b$ or $a&lt;b&lt;0$, the bilateral Laplace transform does not converge for $p=j\omega$ because the imaginary axis is not inside the ROC. Consequently, the Fourier transform does not exist because the corresponding integral does not converge. For the $\mathcal{Z}$-transform the analogous case would be that the ROC does not include the unit circle, in which case the discrete time Fourier transform does not exist.</p>
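A numerical sketch of such a strip (the signal is illustrative, not from the question): take $x(t)=e^{t}u(t)+e^{2t}u(-t)$, whose bilateral Laplace transform $X(p)=1/(p-1)+1/(2-p)$ converges only in the strip $1<\text{Re}\{p\}<2$. The imaginary axis lies outside this strip, so the Fourier integral ($p=j\omega$) diverges, which partial integrals up to $\pm T$ make visible.

```python
import numpy as np

def partial_bilateral(p, T, dt=1e-3):
    # Riemann-sum approximation of the bilateral Laplace integral over [-T, T]
    t_pos = np.arange(0.0, T, dt)
    t_neg = np.arange(-T, 0.0, dt)
    pos = np.sum(np.exp(t_pos) * np.exp(-p * t_pos)) * dt      # e^t u(t) part
    neg = np.sum(np.exp(2 * t_neg) * np.exp(-p * t_neg)) * dt  # e^{2t} u(-t) part
    return pos + neg

# Inside the strip (p = 1.5): converges to 1/(0.5) + 1/(0.5) = 4
in_strip = [partial_bilateral(1.5 + 0j, T) for T in (20, 40, 80)]

# On the imaginary axis (p = j): the right-sided part grows like e^T
on_axis = [partial_bilateral(0 + 1j, T) for T in (20, 40, 80)]
```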
126
Laplace transform
Causal Signal - Fourier Transform or Laplace Transform
https://dsp.stackexchange.com/questions/40201/causal-signal-fourier-transform-or-laplace-transform
<p>I am dealing with a physics problem which is related to signal processing. The problem requires me to calculate the instantaneous force acting on a body which depends on some physical parameter $x$. Assume that $x(t)$ is periodic in time for the moment. Since $x(t)$ is periodic, then it can be expanded as a Fourier series with different frequency components (and it doesn't really matter if $x(t)$ is causal). The calculation for the instantaneous force involves adding a complex phase shift (which may depend on the frequency) to each of the frequency component. To do that, I can use the convolution theorem and take the convolution of $x(t)$ with some kernel $\kappa(t)$ whose Fourier transform gives me the required phase shifts, i.e. $\tilde{\kappa}(\omega) \propto e^{i\delta(\omega)}$ where $\delta(\omega)$ is the phase shift. </p> <p>Now if in reality $x(t)$ is not periodic and is causal since I only know its values in the past, can I still apply the same kernel to get the instantaneous force? I have been told that I should use Laplace transform instead of Fourier transform. I see the point of it being bilateral by definition, but I am not sure how it is actually different to Fourier transform. Does applying the convolution theorem to a causal signal still give me the desired phase shifts?</p>
<p>What you want is an all-pass filter with frequency response</p> <p>$$H(\omega)=e^{j\phi(\omega)}\tag{1}$$</p> <p>where $\phi(\omega)$ is the desired phase shift (and $j$ is how we denote the imaginary unit over here). This system is called an all-pass filter because clearly $|H(\omega)|=1$ holds.</p> <p>The type of input signal is irrelevant, it can be periodic, non-periodic, causal, or non-causal; if you filter it with a linear time-invariant (LTI) filter with a frequency response given by $(1)$ then the desired phase shift will be achieved.</p> <p>Your problem is the (causal and stable) realization of such a filter. In general, for a given phase shift $\phi(\omega)$ the frequency response given by $(1)$ cannot be implemented exactly; it can only be approximated.</p>
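For a concrete (discrete-time) instance of such a phase-only response, here is a sketch of a first-order digital all-pass section $H(z)=(a+z^{-1})/(1+az^{-1})$ with real $|a|<1$; the coefficient $a=0.5$ is arbitrary. Its magnitude response is exactly 1 at every frequency while its phase varies with frequency:

```python
import numpy as np
from scipy.signal import freqz

a = 0.5
# b = [a, 1], a-coefficients = [1, a]: first-order all-pass section
w, H = freqz([a, 1.0], [1.0, a], worN=512)
mag = np.abs(H)                      # should be 1 everywhere
phase = np.unwrap(np.angle(H))       # frequency-dependent phase shift
```

The first-order section only realizes one family of phase curves; approximating an arbitrary target phase $\phi(\omega)$ requires higher-order all-pass designs, as the answer notes.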
127
Laplace transform
What are the advantages of Laplace Transform vs Fourier Transform in signal theory?
https://dsp.stackexchange.com/questions/45910/what-are-the-advantages-of-laplace-transform-vs-fourier-transform-in-signal-theo
<p>What are the advantages of Laplace Transform vs Fourier Transform in signal theory?</p>
<p>Laplace transforms can capture the transient behaviors of systems. Fourier transforms only capture the steady state behavior. Of course, Laplace transforms also require you to think in complex frequency spaces, which can be a bit awkward, and operate using algebraic formulas rather than simple numbers.</p> <p>If you want to see the power distribution of a signal over time, Fourier transforms are often the easiest way to do it. However, if you want to understand what a system does when you flip a light switch, you typically need Laplace transforms.</p>
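A sketch of the "light switch" point, using a hypothetical first-order system $H(s)=1/(s+1)$ (not anything from the question): the Laplace-domain solution of the switch-on (step) response is $y(t)=1-e^{-t}$, a transient $e^{-t}$ contributed by the pole at $s=-1$ plus the steady-state level 1, which is the DC gain $H(0)$ and all that a pure steady-state view would report.

```python
import numpy as np
from scipy.signal import lti, step

# First-order system H(s) = 1/(s + 1)
sys = lti([1.0], [1.0, 1.0])

# Simulated step response vs. the closed-form Laplace solution 1 - exp(-t)
t = np.linspace(0.0, 10.0, 500)
t, y = step(sys, T=t)
expected = 1.0 - np.exp(-t)
```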
128
Laplace transform
What are the advantages and disadvantages of Laplace transform over Z transform?
https://dsp.stackexchange.com/questions/31384/what-are-the-advantages-and-disadvantages-of-laplace-transform-over-z-transform
<p>The Laplace transform of a continuous signal $x(t)$ is given by</p> <p>$$ X(s) = \int\limits_{-\infty}^{+\infty} x(t) e^{-s t} dt. \quad (1) $$</p> <p>The Z-transform of a discrete signal $x[n]$ is given by</p> <p>$$ X(z) = \sum\limits_{n=-\infty}^{+\infty} x[n] z^{-n}. \quad (2)$$</p> <p>I can say that the only difference between the two transforms is that the Laplace transform is used for continuous signals and the Z transform for discrete signals.</p> <p>But what are the advantages and disadvantages of one transform over the other? Where are they used?</p>
<p>Both transforms are equivalent tools, but the Laplace transform is used for continuous-time signals, whereas the $\mathcal{Z}$-transform is used for discrete-time signals (i.e, sequences).</p> <p>You can see that they are equivalent by using the continuous-time representation of a discrete-time signal, and then applying the Laplace transform to that signal. The continuous-time representation of a discrete-time signal is a weighted Dirac comb:</p> <p>$$x_d(t)=\sum_{n=-\infty}^{\infty}x[n]\delta(t-nT)\tag{1}$$</p> <p>where $x[n]$ is the discrete-time signal, $\delta(t)$ is the Dirac delta impulse, and $T$ is the sampling period.</p> <p>The (bilateral) Laplace transform of $(1)$ is</p> <p>$$\begin{align}X_d(s)&amp;=\int_{-\infty}^{\infty}x_d(t)e^{-st}dt\\&amp;=\int_{-\infty}^{\infty}\sum_{n=-\infty}^{\infty}x[n]\delta(t-nT)e^{-st}dt\\&amp;=\sum_{n=-\infty}^{\infty}x[n]\int_{-\infty}^{\infty}\delta(t-nT)e^{-st}dt\\&amp;=\sum_{n=-\infty}^{\infty}x[n]e^{-snT}\tag{2}\end{align}$$</p> <p>which equals the (bilateral) $\mathcal{Z}$-transform of $x[n]$</p> <p>$$X(z)=\sum_{n=-\infty}^{\infty}x[n]z^{-n}\tag{3}$$</p> <p>for $z=e^{sT}$.</p>
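The equality of $(2)$ and $(3)$ under $z=e^{sT}$ can be spot-checked numerically on a short, arbitrary sequence (values chosen purely for illustration):

```python
import numpy as np

x = np.array([1.0, -0.5, 0.25, 0.125])   # finite causal sequence x[n]
T = 0.1                                   # sampling period
s = -0.3 + 2.0j                           # arbitrary point in the s-plane

# Sampled-signal Laplace sum, equation (2): sum_n x[n] exp(-s n T)
laplace_sum = sum(xn * np.exp(-s * n * T) for n, xn in enumerate(x))

# Z-transform, equation (3), evaluated at z = exp(s T)
z = np.exp(s * T)
z_transform = sum(xn * z**(-n) for n, xn in enumerate(x))
```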
129
Laplace transform
Laplace Transform of $-e^{-at}u(-t)$
https://dsp.stackexchange.com/questions/27287/laplace-transform-of-e-atu-t
<p>I have found a problem in applying the Laplace Transform to $-e^{-at}u(-t)$. I am doing these steps:</p> <p>$$ = - \int_{-\infty}^{+\infty} e^{-at}u(-t) e^{-st}dt$$ $$ = - \int_{-\infty}^{0} e^{-at} e^{-st}dt$$ $$ = - \int_{-\infty}^{0} e^{-(a+s)t}dt$$ $$ = - [-\frac{1}{a+s} e^{-(a+s)t}]|_{-\infty}^{0}$$ $$ = - [-\frac{1}{a+s} (e^{-(a+s)\cdot 0}-e^{-(a+s)\cdot(-\infty)})]$$</p> <p>$$ = - [-\frac{1}{a+s} (1- \infty)]$$</p> <p>$$ = \infty$$ Can anyone explain why it comes out like that? I checked on the internet, and all the books show that the answer is $\frac{1}{s+a}$.</p>
130
Laplace transform
Laplace Transform: zeros and corresponding impulse response $h(t)$
https://dsp.stackexchange.com/questions/71611/laplace-transform-zeros-and-corresponding-impulse-response-ht
<h2>Poles and the impulse response</h2> <p>If our impulse response is in the form :</p> <p><span class="math-container">$$h(t) = e^{-\sigma_0 t}\cos(\omega_0 t) \, u(t)$$</span></p> <p>(where <span class="math-container">$u(t)$</span> is the unit step function)</p> <p>And its Laplace transform is :</p> <p><span class="math-container">$$H(s) = \frac{N(s)}{D(s)} = \int_{0}^{+\infty} h(t)e^{-st}dt$$</span> <span class="math-container">$$s = \sigma + j\omega$$</span></p> <p>Poles are values of <span class="math-container">$s$</span> so that <span class="math-container">$$D(s) = 0 \rightarrow H(s) = +\infty $$</span> <strong>But to understand this</strong>, I prefer to look at the integral : it will go to infinity (poles) when <span class="math-container">$s$</span> reflects components of <span class="math-container">$h(t)$</span>. In a way, <span class="math-container">$e^{-st}$</span> &quot;probes&quot; <span class="math-container">$h(t)$</span>. Indeed :</p> <ul> <li><p>A single real pole (<span class="math-container">$s = -\sigma_0$</span>) means <span class="math-container">$h(t) = e^{-\sigma_0t}u(t)$</span> because : <span class="math-container">$$\int_{0}^{+\infty} e^{-\sigma_0t}e^{-(-\sigma_0)t}dt = \int_{0}^{+\infty} 1dt = +\infty $$</span>.</p> </li> <li><p>Complex conjugate poles (<span class="math-container">$s = -\sigma_0 \pm j\omega_0$</span>) mean <span class="math-container">$h(t)$</span> is an exponentially decaying sinusoid (say <span class="math-container">$h(t) = e^{-\sigma_0t}\cos(\omega_0t)$</span>) because : <span class="math-container">$$\int_{0}^{+\infty} e^{-\sigma_0t}\cos(\omega_0t)e^{-(-\sigma_0)t}e^{-j\omega t}dt = \int_{0}^{+\infty}\cos(\omega_0t)e^{-j\omega t}dt $$</span> which is infinite at <span class="math-container">$\omega = \pm\omega_0$</span> (Fourier transform of <span class="math-container">$h(t)$</span> without its exponential component, which is a sinusoid).</p> </li> <li><p>Complex conjugate poles with <span 
class="math-container">$\sigma = 0$</span> (<span class="math-container">$s = \pm j\omega_0$</span>) mean <span class="math-container">$h(t)$</span> has no decaying component (say <span class="math-container">$h(t) = \cos(\omega_0t) u(t)$</span>) because : <span class="math-container">$$\int_{0}^{+\infty} \cos(\omega_0t)e^{-j\omega t}dt$$</span> which is infinite at <span class="math-container">$\omega = \pm\omega_0$</span> (Fourier transform of <span class="math-container">$h(t)$</span> which is a sinusoid).</p> </li> </ul> <h2>Zeros : a dirac in the impulse response ?</h2> <p>Now, let's look at <span class="math-container">$H(s)$</span> for a Notch filter, as shown in ch.32,p.17 of &quot;<a href="http://www.dspguide.com/CH32.PDF" rel="nofollow noreferrer">The Scientist and Engineer's Guide to DSP</a>&quot; and see if similar reasoning on the integrals can be done.</p> <p><a href="https://i.sstatic.net/ZSrpt.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ZSrpt.png" alt="Notch filter" /></a></p> <p>Let's use the following filter (figure above for illustration only, I use different poles and zeros here) :</p> <p><span class="math-container">$$H(s) = \frac{s^2+1}{(s-(-1+i))(s-(-1-i))}$$</span></p> <p>This filter has 2 poles and 2 zeros :</p> <ul> <li>Zeros : <span class="math-container">$z_1,z_2 =\pm i$</span></li> <li>Poles : <span class="math-container">$p_1,p_2 =-1 \pm i$</span></li> </ul> <p>Let's find <span class="math-container">$h(t)$</span> and see why the integral would indeed go to 0 or <span class="math-container">$+\infty$</span> for these values of zeros and poles, respectively.</p> <p>If it makes sense, this <a href="https://www.symbolab.com/solver/inverse-laplace-calculator/inverse%20laplace%20%5Cleft(%5Cfrac%7Bs%5E%7B2%7D%2B1%7D%7B%5Cleft(s%2B1%2Bi%5Cright)%5Cleft(s%2B1-i%5Cright)%7D%5Cright)" rel="nofollow noreferrer">tool</a> gives the following inverse Laplace transform for <span class="math-container">$H(s)$</span> :</p> <p><span 
class="math-container">$$h(t) = \delta(t) - 2e^{-t}\cos(t) u(t) + e^{-t}\sin(t) u(t)$$</span></p> <ul> <li><p>Poles : for <span class="math-container">$s=p_1$</span> or <span class="math-container">$p_2$</span> in the Laplace transform, the exponentials of h(t) get canceled and remain the Fourier transform of some sinusoid which is indeed infinite at <span class="math-container">$\omega = \pm 1$</span> (I'm not discussing the <span class="math-container">$\delta(t)$</span> but I suppose it won't change this result).</p> </li> <li><p>Zeros : for <span class="math-container">$s=z_1$</span> or <span class="math-container">$z_2$</span> in the Laplace transform, the result is 0 if real part and imaginary of the Laplace transform are 0. Real part is :</p> </li> </ul> <p><span class="math-container">$$\int_{0}^{+\infty} (\delta(t) - 2e^{-t}\cos(t)+e^{-t}\sin(t))\cos(t)dt$$</span></p> <p><span class="math-container">$$=\int_{0}^{+\infty} \delta(t)\cos(t)dt + \int_{0}^{+\infty} (- 2e^{-t}\cos(t)+e^{-t}\sin(t))\cos(t)dt$$</span></p> <p>with</p> <p><span class="math-container">$$\int_{0}^{+\infty} (- 2e^{-t}\cos(t)+e^{-t}\sin(t))\cos(t)dt = -1$$</span></p> <p>Imaginary part is :</p> <p><span class="math-container">$$\int_{0}^{+\infty} \delta(t)\sin(t)dt + \int_{0}^{+\infty} (- 2e^{-t}\cos(t)+e^{-t}\sin(t))\sin(t)dt$$</span></p> <p>with</p> <p><span class="math-container">$$\int_{0}^{+\infty} (- 2e^{-t}\cos(t)+e^{-t}\sin(t))\sin(t)dt = 0$$</span></p> <h2>Questions</h2> <ol> <li><strong>If the inverse Laplace transform is correct, how to handle</strong> <span class="math-container">$\int_{0}^{+\infty} \delta(t)\cos(t)dt$</span> and <span class="math-container">$\int_{0}^{+\infty} \delta(t)\sin(t)dt$</span> <strong>to show that</strong> <span class="math-container">$H(s)$</span> <strong>is indeed 0 at</strong> <span class="math-container">$z_1$</span> <strong>and</strong> <span class="math-container">$z_2$</span> ?</li> <li><strong>If all of this is correct, what does it 
(physically) mean for an impulse response to have a dirac in its expression ? I thought impulse response of most physical systems was only a combination of decaying exponentials and sinusoids ?</strong></li> </ol>
<p>For your first question you can use the <a href="https://mathworld.wolfram.com/DeltaFunction.html" rel="nofollow noreferrer">following</a></p> <p><span class="math-container">$$ \int_{-\infty}^{\infty} \delta (t-a)\,f(t)\,dt = f(a), $$</span></p> <p>with <span class="math-container">$f(t)$</span> any function. In your case those integrals would thus yield the values one and zero respectively.</p> <p>For your second question I will only consider linear time-invariant systems. In that case the impulse response of such a system can only contain a Dirac delta function if the transfer function of that system has a numerator of the same order as the denominator. Namely, any transfer function of the form</p> <p><span class="math-container">$$ G(s) = \frac{b_n\,s^n + b_{n-1}\,s^{n-1} + \cdots + b_1\,s + b_0}{s^n + a_{n-1}\,s^{n-1} + \cdots + a_1\,s + a_0}, $$</span></p> <p>with <span class="math-container">$b_n \neq 0$</span> can also be written as</p> <p><span class="math-container">$$ G(s) = b_n + \frac{b'_{n-1}\,s^{n-1} + \cdots + b'_1\,s + b'_0}{s^n + a_{n-1}\,s^{n-1} + \cdots + a_1\,s + a_0}, $$</span></p> <p>with <span class="math-container">$b'_k = b_k - b_n\,a_k$</span>. The inverse Laplace transform of the constant <span class="math-container">$b_n$</span> would contribute a Dirac delta term. For the remaining part of the transfer function one could use partial fraction expansion to show that it can't contribute a Dirac delta term.</p> <p>If a physical system had a numerator of the same order as the denominator, it would require that the output of the system is directly affected by the input. An example of such a physical system might be some electrical motor where you input a voltage and measure the angular position with some voltage leakage from the input signal to the output. However, most physical systems have a numerator of lower order than the denominator. 
You are more likely to encounter equal-order numerators and denominators in digital filters such as notch filters (those are z-domain rather than s-domain, but roughly the same argument holds). However, such filters are often used in series with physical systems, so the combined transfer function again has a lower-order numerator.</p>
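The constant-plus-strictly-proper split described above can be carried out with plain polynomial division. As a sketch, applying it to the notch filter from the question, $H(s)=(s^2+1)/(s^2+2s+2)$:

```python
import numpy as np

num = np.array([1.0, 0.0, 1.0])     # s^2 + 1
den = np.array([1.0, 2.0, 2.0])     # s^2 + 2s + 2

# Polynomial long division: H(s) = q(s) + r(s)/den(s), where the constant
# quotient q = b_n is the part whose inverse transform is the delta term,
# and r/den is strictly proper (no delta contribution).
q, r = np.polydiv(num, den)
```

Here the quotient is the constant 1 and the remainder represents $-2s-1$, i.e. $H(s)=1+(-2s-1)/(s^2+2s+2)$, matching the quoted impulse response $\delta(t)-2e^{-t}\cos(t)u(t)+e^{-t}\sin(t)u(t)$.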
131
Laplace transform
Laplace transform : integral vs poles &amp; zeros
https://dsp.stackexchange.com/questions/71560/laplace-transform-integral-vs-poles-zeros
<p>If Laplace transform is expressed as :</p> <p><span class="math-container">$$\int_{-\infty}^{+\infty} h(t)e^{-st}dt $$</span></p> <p>with :</p> <p><span class="math-container">$$s = \sigma + j\omega$$</span></p> <p>and <span class="math-container">$h(t)$</span> an impulse response expressed as :</p> <p><span class="math-container">$$h(t) = Ae^{-\sigma_0t}\cos(\omega_0t+\phi) = e^{-\sigma_0t}\cos(\omega_0t)$$</span> (<span class="math-container">$A=1$</span> and <span class="math-container">$\phi = 0$</span> for simplification, <span class="math-container">$h(t)=0$</span> if <span class="math-container">$t&lt;0$</span>)</p> <p>Then, each vertical line (parallel to the imaginary axis) in the <span class="math-container">$s$</span> plane corresponds to the Fourier transform of <span class="math-container">$f(t) = h(t)e^{-\sigma t}$</span> for a fixed <span class="math-container">$\sigma$</span>.</p> <p>For <span class="math-container">$\sigma = -\sigma_0$</span>, the decaying exponential of <span class="math-container">$h(t)$</span> is canceled and we get the Fourier transform* of <span class="math-container">$h(t) = \cos(\omega_0t)$</span>, that is : diracs at <span class="math-container">$\omega_0$</span> and <span class="math-container">$-\omega_0$</span> (not accurate, see (*) just below), hence two poles : <span class="math-container">$-\sigma_0 + j\omega_0$</span> and <span class="math-container">$-\sigma_0 - j\omega_0$</span> as in the following picture (illustration only, poles not located correctly) :</p> <p><a href="https://i.sstatic.net/iprcE.gif" rel="nofollow noreferrer" title="maximintergrated.com"><img src="https://i.sstatic.net/iprcE.gif" alt="Poles" title="maximintergrated.com" /></a></p> <p>Indeed, we can understand that :</p> <p><em>(*)Please, note that the following is not accurate : since <span class="math-container">$h(t) = 0$</span> if <span class="math-container">$t&lt;0$</span>, we should use the unilateral Laplace transform, not bilateral 
! So here we would get the unilateral Fourier transform of a sinusoid, not the bilateral (with diracs only) one ! To see what this would be, please see the <a href="http://www.thefouriertransform.com/pairs/rightSidedSinusoids.php" rel="nofollow noreferrer">link</a> given at the end of the accepted answer</em></p> <p><span class="math-container">$$\int_{-\infty}^{+\infty} h(t)e^{-j\omega t}dt $$</span> <span class="math-container">$$= \int_{-\infty}^{+\infty} \cos(\omega_0t)e^{-j\omega t}dt$$</span> <span class="math-container">$$= \int_{-\infty}^{+\infty} \frac{e^{j\omega_0t}+e^{-j\omega_0t}}{2}e^{-j\omega t}dt$$</span> <span class="math-container">$$= \frac{1}{2}\int_{-\infty}^{+\infty} e^{j(\omega_0-\omega)t}+e^{-j(\omega_0+\omega)t}dt$$</span></p> <p>If <span class="math-container">$\omega = \omega_0$</span> or <span class="math-container">$-\omega_0$</span>, then the integral would blow up due to the <span class="math-container">$$\int_{-\infty}^{+\infty} e^0dt $$</span> member, hence the poles in the s plane.</p> <p>So as shown in ch.32, p.24 of <a href="http://www.dspguide.com/CH32.PDF" rel="nofollow noreferrer">The Scientist and Engineer's Guide to DSP</a> (see figures below), with Laplace transform we multiply <span class="math-container">$h(t)$</span> with <span class="math-container">$e^{-st} = e^{-\sigma t}e^{-j\omega t}$</span>, that is we multiply <span class="math-container">$h(t)$</span> with sinusoids that are either :</p> <ul> <li>(a) Exponentially decaying (<span class="math-container">$\sigma$</span> &gt; 0)</li> <li>(b) Stable (<span class="math-container">$\sigma = 0$</span>)</li> <li>(c) Exponentially growing slower than our impulse response decay (<span class="math-container">$ -\sigma_0 &lt; \sigma &lt; 0$</span>)</li> <li>(d) Exponentially growing, compensating our impulse response decay (<span class="math-container">$\sigma = -\sigma_0$</span>) : OK, as studied above.</li> <li>(e) Exponentially growing 
quicker (<span class="math-container">$\sigma &lt; -\sigma_0$</span>, hence also <span class="math-container">$\sigma &lt; 0$</span>)</li> </ul> <p>(letters correspond to pairs of points in the s plane shown in the figures below, always at a fixed <span class="math-container">$\omega$</span> or <span class="math-container">$-\omega$</span> value)</p> <p><a href="https://i.sstatic.net/yL74t.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/yL74t.png" alt="Different values of s..." /></a> <a href="https://i.sstatic.net/kL5ke.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/kL5ke.png" alt="Lead to different values of the integral" /></a></p> <p>I understand case d: since we cancel the exponential part, we get only the <em>(unilateral!)</em> Fourier transform of a sinusoid, that is: infinite at <span class="math-container">$\omega_0$</span> and <span class="math-container">$-\omega_0$</span>, hence the poles (though I didn't know why we get a continuous function of omega with infinite values at <span class="math-container">$\omega_0$</span> and <span class="math-container">$-\omega_0$</span> instead of Dirac impulses as in the original Fourier transform of a sinusoid <strong>-&gt; because we use the unilateral Laplace transform, hence the unilateral Fourier transform; see the end of the accepted answer!</strong>).</p> <p>Cases a, c, and e are intuitive. In case a, we multiply <span class="math-container">$h(t)$</span> by a decaying exponential; the integral takes some finite complex value (for all values of <span class="math-container">$\sigma &gt; 0$</span>). In case c, we multiply by an exponential growing more slowly than the decaying exponential of <span class="math-container">$h(t)$</span>, hence again some finite complex value for the integral (for all values of <span class="math-container">$-\sigma_0 &lt; \sigma &lt; 0$</span>). In case e, we multiply <span class="math-container">$h(t)$</span> by an exponential that grows faster than the exponential of <span class="math-container">$h(t)$</span> decays, hence the integral does not converge (for all values of <span class="math-container">$\sigma &lt; -\sigma_0$</span>).</p> <p><strong>But for case b, I can't get the intuition of why the integral would be zero, as shown with the area under the curve (red in the figures above). In other words, I understand the vertical line in the s plane at <span class="math-container">$\sigma = -\sigma_0$</span>: it is the Fourier transform of <span class="math-container">$h(t)e^{\sigma_0 t}$</span>, i.e., the Fourier transform of <span class="math-container">$h(t)$</span> once its exponential component is removed, hence 2 poles due to the sinusoid. We get poles whenever <span class="math-container">$e^{-st}$</span> exactly cancels the decay of the impulse response. But what would cause the Fourier transform of <span class="math-container">$h(t)e^{-\sigma t}$</span> to be 0 at some <span class="math-container">$\omega$</span>? For which <span class="math-container">$h(t)$</span>, and how would it impact the area under the curve (the integral)?</strong></p>
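The pole locations argued above, $-\sigma_0 \pm j\omega_0$, can be checked symbolically. A minimal sketch with SymPy, assuming example values $\sigma_0 = 1$ and $\omega_0 = 5$ (both chosen for illustration only):

```python
import sympy as sp

t = sp.symbols('t', positive=True)
s = sp.symbols('s')

sigma0, w0 = 1, 5  # assumed example decay rate and frequency

# Unilateral Laplace transform of h(t) = exp(-sigma0*t)*cos(w0*t), t >= 0
h = sp.exp(-sigma0 * t) * sp.cos(w0 * t)
H, _, _ = sp.laplace_transform(h, t, s)

# Poles are the roots of the denominator: expect -sigma0 +/- j*w0
poles = sp.solve(sp.denom(sp.together(H)), s)
print(poles)  # poles at -1 - 5j and -1 + 5j (order may vary)
```

This confirms the pair of poles at $-\sigma_0 \pm j\omega_0$ for the decaying sinusoid.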
<p>The original post has been updated to add information on why the integral diverges or takes some finite complex value.</p> <p>Figure 32-5 (original question) can't be understood (especially &quot;b. Exact cancellation&quot;) if we consider:</p> <p><span class="math-container">$$ h(t) = e^{-\sigma_0t}\cos{\omega_0t} $$</span></p> <p>(<span class="math-container">$h(t) = 0$</span> for <span class="math-container">$t&lt;0$</span>)</p> <p><span class="math-container">$h(t)$</span> in fig. 32-5 is not a simple exponentially decaying sinusoid: if it were, the integral could indeed not equal 0 for any value of s, as raised in the original question.</p> <p>Instead, as pointed out by Matt L., <span class="math-container">$h(t)$</span> is the impulse response of a notch filter. How does this help in understanding why the integral goes to 0 for some <span class="math-container">$s$</span>? Well, this impulse response has the peculiarity of containing a Dirac impulse (in addition to a combination of exponentially decaying sinusoids)! And if you look closely at fig. 32-5, this Dirac impulse is indeed shown in the impulse response (I had missed it, thinking it was the ordinate axis); see the figure below:</p> <p><a href="https://i.sstatic.net/fkAwe.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/fkAwe.png" alt="Dirac, not ordinate axis !" /></a></p> <p>And it is the area under this Dirac impulse that compensates the area under the exponentially decaying sinusoidal components of <span class="math-container">$h(t)$</span> for the appropriate values of <span class="math-container">$s$</span>, hence the zeros!</p> <p>For a more detailed explanation of both the calculation involved and the physical meaning of a Dirac impulse in an impulse response, please see the answers given to <a href="https://dsp.stackexchange.com/questions/71611/laplace-transform-zeros-and-corresponding-impulse-response-ht">this question</a>.</p> <p>Another question was the following:</p> <blockquote> <p>(though I don't know why we have a continuous function of omega with infinite values at ω0 and −ω0 instead of diracs as in the original Fourier transform of a sinusoid).</p> </blockquote> <p>I think this is due to having a unilateral Laplace transform instead of a bilateral one. Indeed, see in this <a href="http://www.thefouriertransform.com/pairs/rightSidedSinusoids.php" rel="nofollow noreferrer">example</a> the unilateral Fourier transform of sine waves. It's as if we multiplied the sine wave by a unit step function, so the unilateral Fourier transform of a sine wave is the Fourier transform of a sine wave convolved with the Fourier transform of a unit step function (see details in the given link). This is why, in a given vertical slice of the <span class="math-container">$s$</span> plane (for a fixed <span class="math-container">$\sigma$</span>), we won't get the usual Fourier transform but the unilateral one, which is a bit different.</p>
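How the unilateral transform is "a bit different" can be visualized numerically: the unilateral transform of $\cos(\omega_0 t)$ is $H(s) = s/(s^2+\omega_0^2)$, and evaluating it on a vertical line just right of the imaginary axis gives a continuous function of $\omega$ with sharp but finite peaks near $\pm\omega_0$, rather than Dirac impulses. A minimal sketch, assuming example values $\omega_0 = 5$ and $\sigma = 0.05$:

```python
import numpy as np

w0 = 5.0      # assumed example frequency
sigma = 0.05  # small positive real part: a line just right of the j*omega axis

omega = np.linspace(-10.0, 10.0, 2001)
s = sigma + 1j * omega

# Closed-form unilateral Laplace transform of cos(w0*t): H(s) = s/(s^2 + w0^2),
# evaluated along the vertical line Re{s} = sigma
H = s / (s**2 + w0**2)

# The magnitude is finite everywhere on this line and peaks near +/- w0,
# unlike the Dirac impulses of the bilateral Fourier transform
peak_omega = omega[np.argmax(np.abs(H))]
print(abs(peak_omega))  # close to w0 = 5.0
```

Letting $\sigma \to 0^+$ makes the peaks grow without bound, recovering the Dirac-like behavior in the limit.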
132
Laplace transform
What is the inverse Laplace transform of squared denominator term?
https://dsp.stackexchange.com/questions/60655/what-is-the-inverse-laplace-transform-of-squared-denominator-term
<p>Referring to the image below, what would the inverse Laplace transform be? I can't seem to find any tables that include this case.</p> <p><a href="https://i.sstatic.net/OcDj5.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/OcDj5.png" alt="enter image description here"></a></p>
<p>This is quite straightforward to solve. Either just use <a href="https://en.wikipedia.org/wiki/Laplace_transform#Table_of_selected_Laplace_transforms" rel="nofollow noreferrer">this table</a> where you can directly find the corresponding result, or "derive" it yourself with very basic knowledge of the Laplace transform.</p> <p>You should know that</p> <p><span class="math-container">$$\mathcal{L}\{u(t)\}=\frac{1}{s}\tag{1}$$</span></p> <p>Integration is equivalent to multiplication with <span class="math-container">$1/s$</span>, so integrating <span class="math-container">$(1)$</span> gives</p> <p><span class="math-container">$$\mathcal{L}\{t\cdot u(t)\}=\frac{1}{s^2}\tag{2}$$</span></p> <p>Replacing <span class="math-container">$s$</span> by <span class="math-container">$s+a$</span> is equivalent to multiplication with <span class="math-container">$e^{-at}$</span>. Consequently, we have</p> <p><span class="math-container">$$\mathcal{L}^{-1}\left\{\frac{2}{(s+2)^2}\right\}=2\,t\,e^{-2t} u(t)\tag{3}$$</span></p>
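The result in $(3)$ can be cross-checked by transforming the time-domain candidate back to the $s$ domain with SymPy (a sketch; the symbol names are arbitrary):

```python
import sympy as sp

t, s = sp.symbols('t s', positive=True)

# Candidate from (3): h(t) = 2*t*exp(-2*t) for t >= 0
h = 2 * t * sp.exp(-2 * t)

# The forward Laplace transform should give back 2/(s+2)^2
F, _, _ = sp.laplace_transform(h, t, s)
print(sp.simplify(F))  # 2/(s + 2)**2
```

`sympy.inverse_laplace_transform` can also be applied directly to `2/(s+2)**2` to obtain the time-domain expression, including the `Heaviside(t)` factor corresponding to $u(t)$.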
133