| category | title | question_link | question_body | answer_html | __index_level_0__ |
|---|---|---|---|---|---|
convolution
|
Convolution of shifted signal
|
https://dsp.stackexchange.com/questions/52560/convolution-of-shifted-signal
|
<p>If <span class="math-container">$y(t) = x(t)*h(t)$</span>, then what is the expression for <span class="math-container">$y(t+a)$</span>?</p>
<p>Is it <span class="math-container">$x(t+a)*h(t+a)$</span> or <span class="math-container">$x(t+a)*h(t)$</span>?</p>
|
<p>From your confusion of <span class="math-container">$x(t+a) \star h(t+a)$</span> vs <span class="math-container">$h(t) \star x(t+a)$</span>, I guess that a little help with argument manipulations on functions and convolutions would be appropriate here, working through simple examples:</p>
<p>First, let us express the usual simple case. Consider the relation:
<span class="math-container">$$ y(t) = h(t) \cdot x(t) + g(t) \tag{1}$$</span></p>
<p>then manipulations on the argument <span class="math-container">$t$</span> are applied to all functions on both sides</p>
<p><span class="math-container">$$ y(t+a) = h(t+a) \cdot x(t+a) + g(t+a)$$</span></p>
<p>or an arbitrary transform on <span class="math-container">$t$</span> would similarly be:
<span class="math-container">$$ y(\phi(t)) = h(\phi(t)) \cdot x(\phi(t)) + g(\phi(t))$$</span></p>
<p>Now consider the case where two functions are convolved to produce a third:
<span class="math-container">$$y(t) = \int_{-\infty}^{\infty} h(\tau) x(t-\tau) d\tau $$</span></p>
<p>which is abbreviated as
<span class="math-container">$$ y(t) = h(t) \star x(t) \tag{2} $$</span></p>
<p>Now be careful when interpreting case 2. The variable <span class="math-container">$t$</span> appears in all functions as an argument, but you may not apply the transform on <span class="math-container">$t$</span> as you did in case 1; so assume you have an <strong>arbitrary</strong> transform on <span class="math-container">$t$</span>, say <span class="math-container">$\phi(t)$</span>, then</p>
<p><span class="math-container">$$ y(\phi(t)) \neq h(\phi(t)) \star x(\phi(t)) \tag{3} $$</span></p>
<p>For example, as in your case, if <span class="math-container">$\phi(t) = t+a$</span> then you get </p>
<p><span class="math-container">$$ y(t+a) \neq h(t+a) \star x(t+a)$$</span>
but
<span class="math-container">$$ y(t+a) = h(t) \star x(t+a) = h(t+a) \star x(t) $$</span></p>
<p>The justification of this can (only) be seen when you consider the <strong>integral</strong> definition of the convolution operator:</p>
<p><span class="math-container">$$
\begin{align}
y(t) &= h(t) \star x(t) \\
& = \int_{-\infty}^{\infty} h(\tau) x(t-\tau)d\tau \\
y(t+a) & = \int_{-\infty}^{\infty} h(\tau) x((t+a)-\tau)d\tau \\
& = h(t) \star x(t+a) \\
\end{align}
$$</span></p>
<p>Note that since the <strong>live</strong> variable inside the integral appears in only one function (<span class="math-container">$x(t-\tau)$</span> in this case), a change in <span class="math-container">$t$</span> will only affect that one, and you get:</p>
<p><span class="math-container">$$ y(t+a) = h(t) \star x(t+a) $$</span>
or from commutativity of convolution you get
<span class="math-container">$$ y(t+a) = h(t+a) \star x(t) $$</span></p>
<p>So this provides the answer you were looking for. However, it's not over, because the following case is an exception:</p>
<p><span class="math-container">$$ y(-t) \neq h(t) \star x(-t) $$</span>
but
<span class="math-container">$$ y(-t) = h(-t) \star x(-t) \tag{4} $$</span></p>
<p>So how do we see case 4? Again, using the integral definition:</p>
<p>Assuming that <span class="math-container">$y(t) = h(t) \star x(t)$</span>, then compute the convolution between two new signals <span class="math-container">$g(t)=h(-t)$</span> and <span class="math-container">$z(t)=x(-t)$</span> as:
<span class="math-container">$$
\begin{align}
w(t) &= g(t) \star z(t) \\
& = \int_{-\infty}^{\infty} g(\tau) z(t-\tau)d\tau &g(\tau)=h(-\tau),z(t-\tau)=x(-(t-\tau)) \\
& = \int_{-\infty}^{\infty} h(-\tau) x(-(t-\tau))d\tau &\text{ let } \tau'=-\tau \\
& = -\int_{\infty}^{-\infty} h(\tau') x(-(t+\tau'))d\tau' &\text{ replace } \tau' \text{ with } \tau \\
& = \int_{-\infty}^{\infty} h(\tau) x(-t-\tau) d\tau \\
& = y(-t) \\
\end{align}
$$</span></p>
<p>hence we conclude that <span class="math-container">$h(-t) \star x(-t) = w(t) = y(-t) $</span>. As stated before, you must always consult the (explicit) integral definition to decide on the correct functions used in the convolution operator.</p>
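<p>A quick discrete-time check of these identities (a NumPy sketch; the sequence lengths and the shift <code>a</code> are arbitrary choices of mine): shifting one operand shifts the convolution output by <code>a</code> samples, while shifting both shifts it by <code>2a</code>.</p>

```python
import numpy as np

def shift(s, a):
    """Delay a finite sequence by a samples (prepend a zeros)."""
    return np.concatenate([np.zeros(a), s])

rng = np.random.default_rng(1)
h = rng.standard_normal(8)
x = rng.standard_normal(16)
a = 3

y = np.convolve(h, x)                       # y[n] = (h * x)[n]

# shifting ONE operand shifts the output by a (either operand, by commutativity)
print(np.allclose(np.convolve(h, shift(x, a)), shift(y, a)))        # True
print(np.allclose(np.convolve(shift(h, a), x), shift(y, a)))        # True

# shifting BOTH operands shifts the output by 2a, not by a
print(np.allclose(np.convolve(shift(h, a), shift(x, a)), shift(y, 2 * a)))  # True
```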
| 434
|
convolution
|
What are three different techniques to convolve a signal
|
https://dsp.stackexchange.com/questions/63829/what-are-three-different-techniques-to-convolve-a-signal
|
<p>I have studied convolution in the time domain and in the frequency domain, but I don't know the third method and technique.</p>
| 435
|
|
convolution
|
Why does linear convolution with itself converges to Gaussian, but not circular?
|
https://dsp.stackexchange.com/questions/83945/why-does-linear-convolution-with-itself-converges-to-gaussian-but-not-circular
|
<p>If you have a discrete signal, say</p>
<blockquote>
<p>[1,0,0,1]</p>
</blockquote>
<p>and you convolve it linearly with itself, you will quickly get a Gaussian distribution.</p>
<p>Why does it not hold for circular convolution?</p>
|
<p>Convolving <span class="math-container">$x = [1,0,0,1]$</span> with itself repeatedly will not generate a "Gaussian distribution".</p>
<p><span class="math-container">\begin{align}
x \star x &= [1,0,0,2,0,0,1]\\
x \star x \star x &= [1,0,0,3,0,0,3,0,0,1]
\end{align}</span></p>
<p>and so on.</p>
<p>Convolving <span class="math-container">$y = [1,1]$</span> with itself will, eventually, generate something that looks like the Gaussian shape.</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt

x = [1, 1]
N = 20
xx = x.copy()
for idx in np.arange(N):
    xx = np.convolve(xx, x)
plt.plot(xx)
</code></pre>
<p><a href="https://i.sstatic.net/ixcaB.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ixcaB.png" alt="Convolving 20 times" /></a></p>
<p>The support (length) of the output of all those convolutions grows and grows with each new convolution.</p>
<p>When circular convolution is done, the support cannot grow to be longer than the chosen length of the convolution (in my code below, the length of the FFT used to implement the convolution).</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt

def my_convolve(x, y, Nfft):
    X = np.fft.fft(x, Nfft)
    Y = np.fft.fft(y, Nfft)
    Z = X * Y
    return np.fft.ifft(Z, Nfft)

x = [1, 1]
N = 20
xx = x.copy()
for idx in np.arange(N):
    xx = np.real(my_convolve(xx, x, 16))
plt.plot(xx)
</code></pre>
<p><a href="https://i.sstatic.net/xP8Np.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/xP8Np.png" alt="Circularly convolved shape" /></a></p>
<p>As a result, the "tails" (ends) of the output will overlap due to the circularity.</p>
| 436
|
convolution
|
Is Periodic Convolution Same as Circular Convolution?
|
https://dsp.stackexchange.com/questions/38450/is-periodic-convolution-same-as-circular-convolution
|
<p>I know exactly what periodic convolution is. But I don't know if circular convolution means the same thing!</p>
|
<p>You did not give specific details on how you define periodic or circular. Assuming standard definitions, the answer is <strong>yes</strong>.</p>
<p>Standard infinite-support data is treated with standard convolution. When the support is finite, assumptions are required for the "outer" unknown samples.</p>
<p>Zero-valued outside samples are a common assumption. Another option, often used with Fourier or wavelet transforms, consists of arranging the finite-support data on a ring or torus, with the first sample following the last one.</p>
<p>This arrangement is termed periodic, cyclic or circular, with essentially the same meaning.</p>
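<p>A small numerical sketch of that equivalence (assuming the standard definitions; the values are arbitrary choices of mine): the DFT-based "circular" convolution matches the periodic convolution computed directly with modulo indexing.</p>

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0])
h = np.array([1.0, 0.0, 0.0, 1.0])
N = len(x)

# circular convolution via the DFT (multiplication of N-point DFTs)
circ = np.real(np.fft.ifft(np.fft.fft(x) * np.fft.fft(h)))

# periodic convolution over one period, indices wrapped modulo N
per = np.array([sum(x[k] * h[(n - k) % N] for k in range(N)) for n in range(N)])

print(np.allclose(circ, per))   # True
print(per)                      # [3. 5. 7. 5.]
```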
| 437
|
convolution
|
A question about time-shifted convolution
|
https://dsp.stackexchange.com/questions/87358/a-question-about-time-shifted-convolution
|
<p>I have 2 time-shifted signals,<span class="math-container">$f(t-t_0)$</span> and <span class="math-container">$g(t-t_0)$</span>. Assuming that <span class="math-container">$y=f(t)*g(t)$</span>, which is the convolution of the signals.</p>
<p>By definition, I have <span class="math-container">$f(t-t_0)*g(t-t_0)=\int_{-\infty}^{+\infty}{f(\tau-t_0)g(t-t_0-\tau)d\tau}$</span></p>
<p>Substitute <span class="math-container">$\tau-t_0$</span> with <span class="math-container">$t_1$</span> then I have</p>
<p><span class="math-container">$f(t-t_0)*g(t-t_0)=\int_{-\infty}^{+\infty}{f(t_1)g(t-2t_0-t_1)dt_1}$</span> which gives <span class="math-container">$f(t-t_0)*g(t-t_0)=y(t-2t_0)$</span></p>
<p>But when I considered substituting the independent variable by <span class="math-container">$T=t-t_0$</span>, I have</p>
<p><span class="math-container">$f(t-t_0)*g(t-t_0)=f(T)*g(T)=y(T)=y(t-t_0)$</span>, which is different from the result above.</p>
<p>Can anyone point out where my mistake is? Thanks in advance.</p>
|
<p>The mistake lies in the use of bad notation for convolution:</p>
<p><span class="math-container">$$y(t)=f(t)*g(t)\qquad \text{(bad!)}\tag{1}$$</span></p>
<p>The correct way to write convolution is</p>
<p><span class="math-container">$$y(t)=(f*g)(t)\tag{2}$$</span></p>
<p>As long as you don't play around with the variable <span class="math-container">$t$</span> everything is fine, so sometimes the sloppy use of notation isn't problematic, but as soon as you shift signals or modify <span class="math-container">$t$</span> in some other way, the notation in <span class="math-container">$(1)$</span> will get you into trouble, as you've seen yourself.</p>
<p>The correct way to write down what you did would be to define new functions for the shifted versions of the original functions:</p>
<p><span class="math-container">$$\tilde{f}(t)=f(t-t_0),\qquad \tilde{g}(t)=g(t-t_0)$$</span></p>
<p>and write the convolution as <span class="math-container">$(\tilde{f}*\tilde{g})(t)$</span>.</p>
| 438
|
convolution
|
Simplification of convolution
|
https://dsp.stackexchange.com/questions/87361/simplification-of-convolution
|
<p>Let's assume that there are two convolutions:</p>
<p><span class="math-container">$y_1 = (h_2[n]\cdot x[n])*(\bar{h_1}[n]\cdot\bar{x}[n])$</span></p>
<p><span class="math-container">$y_2 = (h_3[n]\cdot x[n])*(\bar{h_2}[n]\cdot\bar{x}[n])$</span></p>
<p>where "<span class="math-container">$\bar{x}$</span>" is complex conjugate, <span class="math-container">$\cdot$</span> is element-by-element multiplication, and <span class="math-container">$*$</span> is linear convolution, that is <span class="math-container">$(f*g)[n]=\sum_mf[m]g[n-m]$</span>.</p>
<p>I am looking to simplify <span class="math-container">$y = y_1+y_2$</span>.</p>
<p>It would not be difficult to show that <span class="math-container">$$y\neq ((h_2[n]+h_3[n])\cdot x[n])*((\bar{h_1}[n]+\bar{h_2}[n])\cdot\bar{x}[n])$$</span>
But is there any other way to algebraically simplify <span class="math-container">$y$</span>?</p>
<p>This can also be seen as the sum of two products <span class="math-container">$Y_1$</span> and <span class="math-container">$Y_2$</span>, (the FFTs), where in each FFT there are two convolutions:</p>
<p><span class="math-container">$Y_1 = (H_2*X)\cdot(\tilde{H_1}*\tilde{X})$</span></p>
<p><span class="math-container">$Y_2 = (H_3*X)\cdot(\tilde{H_2}*\tilde{X})$</span></p>
|
<p>Here is the simplest approach I could think of:</p>
<p><span class="math-container">$$x_2[n]= h_2[n]x[n]$$</span></p>
<p><span class="math-container">$$\bar{x_2}[n] = \bar{h_2}[n]\bar{x}[n]$$</span></p>
<p>Thus we eliminate one product as we can get <span class="math-container">$\bar{x_2}[n]$</span> from <span class="math-container">$x_2[n]$</span>. This leaves two more products before going to the frequency domain:</p>
<p><span class="math-container">$$\bar{x}_1[n]= \bar{h}_1[n]\bar{x}[n]$$</span></p>
<p><span class="math-container">$$x_3[n]=h_3[n]x[n]$$</span></p>
<p>With the three products use the FFT to compute the convolution in the frequency domain (zero pad to twice the length if a linear convolution result is desired):</p>
<p><span class="math-container">$$y_1+y_2 = \text{ifft}(X_2[k]\bar{X_1}[N-k]+X_3[k]\bar{X_2}[N-k])$$</span></p>
<p>So in summary, I simplified the operations by reusing results wherever possible, and by doing all convolution operations as products in the frequency domain. Since <span class="math-container">$x_1[n]$</span>, <span class="math-container">$x_2[n]$</span> and <span class="math-container">$x_3[n]$</span> are completely independent (there being no relationship between the complex <span class="math-container">$h_1[n]$</span>, <span class="math-container">$h_2[n]$</span> and <span class="math-container">$h_3[n]$</span>), I do not see the opportunity for any further algebraic simplification.</p>
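<p>A NumPy sketch of this recipe (the variable names are my own): conjugating in the time domain before the FFT sidesteps the <span class="math-container">$\bar{X}[N-k]$</span> index manipulation, and zero padding to twice the length makes the frequency-domain products correspond to linear convolutions.</p>

```python
import numpy as np

rng = np.random.default_rng(0)
n = 32
x  = rng.standard_normal(n) + 1j * rng.standard_normal(n)
h1 = rng.standard_normal(n) + 1j * rng.standard_normal(n)
h2 = rng.standard_normal(n) + 1j * rng.standard_normal(n)
h3 = rng.standard_normal(n) + 1j * rng.standard_normal(n)

x1, x2, x3 = h1 * x, h2 * x, h3 * x      # the three element-wise products
N = 2 * n                                 # pad so products give LINEAR convolution

# y1 + y2 = x2 * conj(x1) + x3 * conj(x2), done as one frequency-domain sum;
# note conj(x2)'s FFT is reused implicitly by conjugating before the FFT
Y = (np.fft.fft(x2, N) * np.fft.fft(np.conj(x1), N)
     + np.fft.fft(x3, N) * np.fft.fft(np.conj(x2), N))
y_fast = np.fft.ifft(Y)[:2 * n - 1]

y_direct = np.convolve(x2, np.conj(x1)) + np.convolve(x3, np.conj(x2))
print(np.allclose(y_fast, y_direct))      # True
```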
| 439
|
convolution
|
Implementation of a length-N moving average filter using a block convolution algorithm (overlap-save algorithm)
|
https://dsp.stackexchange.com/questions/68515/implemenation-of-length-n-moving-average-filter-using-block-covolution-algorithm
|
<p>I want to implement a length-N moving average filter using a block convolution algorithm (the overlap-save algorithm).</p>
<p>I am copying the block convolution algorithm (overlap-save) from Proakis, DSP Using MATLAB, 3rd Ed.</p>
<pre><code>function [y] = ovrlpsav(x,h,N)
% Overlap-Save method of block convolution
% ----------------------------------------
% [y] = ovrlpsav(x,h,N)
% y = output sequence
% x = input sequence
% h = impulse response
% N = block length
%
Lenx = length(x); M = length(h);
M1 = M-1; L = N-M1;
h = [h zeros(1,N-M)];
%
x = [zeros(1,M1), x, zeros(1,N-1)];  % prepend (M-1) zeros
K = floor((Lenx+M1-1)/(L));          % # of blocks
Y = zeros(K+1,N);
% convolution with successive blocks
for k=0:K
    xk = x(k*L+1:k*L+N);
    Y(k+1,:) = cconv(xk,h,N);
end
Y = Y(:,M:N)';                       % discard the first (M-1) samples
y = (Y(:))';                         % assemble output
</code></pre>
<p>Now, using the above function, I am trying to implement a length-N moving average filter.
My script (m-file) is below:</p>
<pre><code>clc
clear all
close all

n = 1:5;
signal = n*pi;
subplot(221)
stem(n, signal);
xlabel('n'); ylabel('amplitude'); title('original signal')

noise = rand(1,5);
subplot(222)
stem(n, noise);
xlabel('n'); ylabel('amplitude'); title('noise')

p = ovrlpsav(signal, noise, 6)
subplot(223)
stem(p); xlabel('n'); ylabel('amplitude'); title('corrupted signal')

m = 5;
xu = 0;
for i = 1:m
    x = signal + noise;
    xu = xu + x;
end
xu = xu/m;
subplot(224)
stem(n, xu); xlabel('n'); ylabel('amplitude'); title('filtered signal')
</code></pre>
<p>I am getting a filtered signal similar to the input signal, as shown by the above code, <strong>but am I following the proper approach?</strong></p>
|
<p>In pseudo-Python code, a length-N moving average can be kept with a bounded buffer:</p>
<pre><code>from collections import deque

tail = 5
window = deque(maxlen=tail)   # keeps only the last `tail` samples

# inside the loop where you feed it values:
window.append(entry)
avg = sum(window) / len(window)
print(avg)
</code></pre>
| 440
|
convolution
|
Question about convolution of shifted function-simple
|
https://dsp.stackexchange.com/questions/43339/question-about-convolution-of-shifted-function-simple
|
<p>I memorized the convolution formula, all good. I started doing examples and in one of them I had to calculate the convolution of a shifted function with another one.</p>
<p>I know the property $$H(x)=f(x)\ast g(x)\\H(x-k)=f(x)\ast g(x-k)=f(x-k)*g(x)$$</p>
<p>So I had no trouble finding the result but I had trouble writing the integral. Which of the following is correct?</p>
<p>$$\int_{-\infty}^\infty f(τ)g(t-k-τ)dτ\\\int_{-\infty}^\infty f(τ)g(t+k-τ)dτ$$</p>
|
<p>Considering a continuous-time LTI system with impulse response <span class="math-container">$h(t)$</span>, its output to any valid input <span class="math-container">$x(t)$</span> is described by a <strong>convolution integral</strong>:</p>
<p><span class="math-container">$$y(t) = x(t) \star h(t) = \int_{-\infty}^{\infty} x(\tau) h(t-\tau) d\tau \tag{1.a}$$</span> and equivalently
<span class="math-container">$$y(t) = h(t) \star x(t) = \int_{-\infty}^{\infty} h(\tau) x(t-\tau) d\tau \tag{1.b}$$</span><br />
(convolution operator is <strong>commutative</strong> .)</p>
<p>Hence, a shifted output <span class="math-container">$y(t-k)$</span> is shown as
<span class="math-container">$$y(t-k) = \int_{-\infty}^{\infty} x(\tau) h(t-k-\tau) d\tau \tag{2.a}$$</span> or equivalently as</p>
<p><span class="math-container">$$y(t-k) =\int_{-\infty}^{\infty} h(\tau) x(t-k-\tau) d\tau \tag{2.b}$$</span></p>
<p>Eqs.2a & 2b are <span class="math-container">$x(t) \star h(t-k)$</span> and <span class="math-container">$ h(t) \star x(t-k)$</span> respectively.</p>
<p>Note that, Eqs.1 uses the convention that the second operand of convolution is shifted inside the integral, which is also the case in Eqs.2. However, one can also write the shifted output <span class="math-container">$y(t-k)$</span> as <span class="math-container">$x(t-k) \star h(t)$</span> or <span class="math-container">$h(t-k) \star x(t)$</span> which yield the integrals:</p>
<p><span class="math-container">$$y(t-k) = \int_{-\infty}^{\infty} x(\tau-k) h(t-\tau) d\tau \tag{3.a}$$</span> and
<span class="math-container">$$y(t-k) = \int_{-\infty}^{\infty} h(\tau-k) x(t-\tau) d\tau \tag{3.b}$$</span><br />
respectively.</p>
| 441
|
convolution
|
Fusing convolutions
|
https://dsp.stackexchange.com/questions/89456/fusing-convolutions
|
<p>I am attempting to combine two consecutive 2D convolutions into a single 2D convolution. Ideally, a convolution operation is a linear transformation, so it should be possible to merge two linear transformations into one.</p>
<p>In practice, I have performed all the necessary calculations to derive the fused kernel. I began with two 3x3 2D convolutions, each with a stride of 1 and padding of 1, and attempted to merge them into a single 5x5 convolution with a stride of 1 and padding of 2.</p>
<p>The results indicated that these operations are equivalent for most of the image except for the borders. The pixels processed at the border of the input are different, and it's impossible to set the padding differently to achieve equivalence. So, what is happening? I also attempted to train a 5x5 Conv2D model to mimic the operations and obtained results similar to my algebraic derivation of the kernel, but they still aren't equivalent.</p>
<p>What could be the explanation for this?</p>
<p>Two linear layers can be fused into one linear layer, but two convolutions cannot always be fused into one convolution.
How is this possible, and what is the name of this phenomenon?</p>
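<p>A 1-D sketch of what is going on (the 2-D case behaves the same way; all values here are arbitrary choices of mine): with <em>full</em> convolutions the fused kernel <span class="math-container">$k_2 * k_1$</span> is exact, but with zero-padded "same"-size convolutions the borders differ, because the cropping after the first convolution throws away border samples that the fused kernel would still need.</p>

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(32)
k1 = rng.standard_normal(3)
k2 = rng.standard_normal(3)

fused = np.convolve(k2, k1)                    # two 3-taps -> one length-5 kernel
two_pass = np.convolve(np.convolve(x, k1), k2)
one_pass = np.convolve(x, fused)
print(np.allclose(two_pass, one_pass))         # True: exact with full convolutions

# with zero-padded "same"-size convolutions the interior matches but the
# borders differ: the intermediate's border values are not zero padding
same = lambda s, k: np.convolve(s, k, mode='same')
a = same(same(x, k1), k2)
b = same(x, fused)
print(np.allclose(a[4:-4], b[4:-4]), np.allclose(a, b))  # True False
```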
| 442
|
|
convolution
|
Convolution of delta function in discrete time with parameter
|
https://dsp.stackexchange.com/questions/46363/convolution-of-delta-function-in-discrete-time-with-parameter
|
<p>If I have a delta function of the following type</p>
<p>$$ n\delta(5n-20) * 3^{5n}u[n] $$ How can I calculate this convolution?</p>
<p>I have thought of the property $$ \delta[n-n_0] *x[n] = x[n-n_0] $$</p>
<p>Then if I say $$ k=5n $$ and $$ t[n]= 3^{5n}u[n] $$</p>
<p>It should be that the convolution is equal to :</p>
<p>$$ nt[\frac{n}{5}-20] $$ and replacing t $$ n 3^{5(\frac{n}{5}-20)}u[\frac{n}{5}-20]$$</p>
<p>Is this right? Or am I wrong somewhere?</p>
|
<p>You can use the following argument to find the result. The discrete-time unit-sample function $\delta[n]$ has the following property for any nonzero integer $M$:
$$ \delta[Mn] = \delta[n] $$ and more generally you can conclude that for integers $M$ and $d$ we have
$$ \delta[M(n-d)] = \delta[n-d] $$</p>
<p>Therefore you can replace $\delta[5n-20] = \delta[5(n-4)]$ with $\delta[n-4]$ and proceed as usual to find the result of the convolution as:</p>
<p>$$y[n] = n\delta(5n-20) \star 3^{5n}u[n] = n\delta(n-4) \star 3^{5n}u[n] = 4 \cdot 3^{5(n-4)}u[n-4] $$</p>
<p>where in this last line we have used the <strong>sifting</strong> and <strong>shifting</strong> properties of the impulse function.</p>
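<p>A quick numerical confirmation of this result (a NumPy sketch over a short index range of my choosing):</p>

```python
import numpy as np

n = np.arange(10)
x = n * (5 * n - 20 == 0)     # n*delta[5n-20]: equals 4*delta[n-4]
h = 3.0 ** (5 * n)            # 3^{5n} u[n] sampled on n = 0..9
y = np.convolve(x, h)[:10]    # y[n] = 4 * 3^{5(n-4)} u[n-4]

print(y[3], y[4], y[5])       # 0.0 4.0 972.0  (972 = 4 * 3^5)
```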
| 443
|
convolution
|
Sample time in convolution
|
https://dsp.stackexchange.com/questions/63927/sample-time-in-convolution
|
<p>I have a signal sampled with a period of 0.01 s and a channel impulse response sampled with a period of 0.125 s. I convolved the signal with the channel impulse response.</p>
<ul>
<li>Is it right to perform the convolution between two signals with different sample times?</li>
<li>If it is, how can I calculate the sample time of the convolution result?</li>
</ul>
|
<blockquote>
<p>Is it right to perform the convolution between two signals with different sample times?</p>
</blockquote>
<p>No; aside from a very few special use cases (if you have to ask, yours isn't one of them), you need to resample one of them so that both are at the same sample rate.</p>
<p>Also notice that sampling a channel impulse response at 8 Hz only gives you information about these 8 Hz (or 4 Hz, depending on whether this is real direct sampling or a sampled equivalent complex baseband representation of the channel). You're trying to apply it to a 100 Hz (or 200 Hz, depending...) wide channel: that won't give you anything useful, even <strong>if</strong> you resampled. Your channel impulse response measurement simply isn't sufficient.</p>
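<p>If resampling were justified, the required rational factor follows directly from the two sampling periods (a sketch; something like <code>scipy.signal.resample_poly(h, 25, 2)</code> would then perform the actual polyphase resampling):</p>

```python
from fractions import Fraction

Ts_signal = Fraction(1, 100)   # 0.01 s  -> 100 Hz
Ts_channel = Fraction(1, 8)    # 0.125 s ->   8 Hz

# factor needed to bring the channel response onto the signal's sample grid
ratio = Ts_channel / Ts_signal
print(ratio)                   # 25/2: upsample by 25, then downsample by 2
```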
| 444
|
convolution
|
Why is circular convolution used in DSP? Why not linear convolution?
|
https://dsp.stackexchange.com/questions/35155/why-is-circular-convolution-used-in-dsp-why-not-linear-convolution
|
<ol>
<li><p>Why are we using circular convolution in DSP? </p></li>
<li><p>What's the main solid reason for the use of it in digital
processing? </p></li>
<li><p>Why does the concept of circular convolution come more often than
linear convolution?</p></li>
</ol>
|
<p>Given a discrete-time <strong>LTI</strong> system with impulse response <span class="math-container">$h[n]$</span>, one can compute its response to any input <span class="math-container">$x[n]$</span> by a <strong>convolution</strong> sum: <span class="math-container">$$y[n] = x[n] \star h[n] = \sum_{k=-\infty}^{\infty} {h[k]x[n-k]} \tag{1.a}$$</span></p>
<p>It's a <strong>linear convolution</strong> (aperiodic convolution) for <span class="math-container">$ -\infty < n < \infty$</span> between aperiodic sequences of finite or infinite duration.</p>
<p>A <strong>circular convolution</strong> is by definition between two periodic sequences, <span class="math-container">$\tilde{x}[n]$</span> and <span class="math-container">$\tilde{h}[n]$</span>, of period <span class="math-container">$N$</span>:</p>
<p><span class="math-container">$$ \tilde{y}[n] = \sum_{k=0}^{N-1}\tilde{x}[(k)_N]\tilde{h}[(n-k)_N] \tag{1.b}$$</span> computed over a single period, where <span class="math-container">$()_N$</span> denotes the modulo-<span class="math-container">$N$</span> operation.</p>
<p>Hand computation of a linear convolution can be performed either in the time-domain, based on Eq.1a, or in the frequency-domain via the following <strong>DTFT</strong> (discrete-time Fourier transform) property:
<span class="math-container">$$y[n] = x[n]\star h[n] \implies Y(e^{j\omega}) = X(e^{j\omega}) H(e^{j\omega}) \tag{2}$$</span></p>
<p>Where the DTFT is:
<span class="math-container">$$X(e^{j \omega}) = \sum_{n=-\infty}^{\infty} x[n] e^{-j\omega n} \tag{3}$$</span></p>
<p>Note that DTFT is naturally related with the linear convolution, because it deals in general with aperiodic (as well as periodic) sequences of finite or infinite duration.</p>
<p>To numerically evaluate the convolution sum with a computer, one can still use the time-domain Eq.1a, BUT not the frequency-domain Eq.2; since the DTFT output variable <span class="math-container">$\omega$</span> is a real <strong>continuous</strong> number; not representable with a digital computer (we shall ignore the computer symbolic algebra systems for obvious reasons).</p>
<p>To achieve Eq.2, we shall use a finite-length discrete-variable transform that can be represented with a digital computer, which is the <strong>DFT</strong> (discrete Fourier transform) <span class="math-container">$X[k]$</span>, defined as the uniform samples of the DTFT:
<span class="math-container">$$ X[k] = X(e^{j\omega})|_{\omega = \frac {2\pi k}{N} } \tag{4}$$</span></p>
<p>where <span class="math-container">$k=0,1,...,N-1$</span> implies <span class="math-container">$0 \le \omega < 2\pi$</span>, and <span class="math-container">$N$</span> is the length of the <span class="math-container">$N$</span>-point DFT <span class="math-container">$X[k]$</span>, which is actually a <strong>periodic</strong> sequence, inheriting the periodicity of the DTFT, but we only consider its first (base) period.</p>
<p>Then for periodic sequences, Eq.2 is restated using the DFT as follows:</p>
<p><span class="math-container">$$x[n] (\star) h[n] \leftrightarrow X[k]H[k] \tag{5.a}$$</span></p>
<p><span class="math-container">$$ \tilde{x}[n] (\star) \tilde{h}[n] \leftrightarrow X[k]H[k] \tag{5.b}$$</span></p>
<p>where, all sequences in Eq.5 are periodic with period <span class="math-container">$N$</span>. Since DFT sequences are inherently periodic, then their convolution, in Eq.5b, will also be periodic (circular).</p>
<p>Therefore, implementing a convolution in the frequency-domain is based on Eq.5b, with the following relation between a linear and a circular convolution.</p>
<p>An aperiodic sequence <span class="math-container">$y[n]$</span>, obtained with a <strong>linear</strong> convolution between aperiodic sequences, is implied by the inverse DTFT
<span class="math-container">$$y[n] =\mathcal{I-DTFT} \{ X(e^{j\omega})H(e^{j\omega}) \} \tag{6.a}$$</span>
<span class="math-container">$$y[n] =\frac{1}{2\pi} \int_{-\pi}^{\pi}X(e^{j\omega})H(e^{j\omega}) e^{j \omega n} d\omega \tag{6.b}$$</span></p>
<p>whereas, a periodic sequence <span class="math-container">$\tilde{y}[n]$</span>, obtained with a <strong>circular</strong> convolution between periodic sequences, is implied by the inverse DFT <span class="math-container">$$\tilde{y}[n] = \mathcal{I-DFT} \{ X[k]H[k] \} \tag{7.a}$$</span>
<span class="math-container">$$\tilde{y}[n] = \frac{1}{N} \sum_{k=0}^{N-1} X[k]H[k] e^{ j \frac{2 \pi}{N} k n} \tag{7.b}$$</span></p>
<p>The fundamental relation between the linear convolution output <span class="math-container">$y[n]$</span> and the circular convolution output <span class="math-container">$\tilde{y}[n]$</span> is that <span class="math-container">$\tilde{y}[n]$</span> is the <strong>periodic extension</strong> of <span class="math-container">$y[n]$</span>:
<span class="math-container">$$ \tilde{y}[n] = \sum_{r=-\infty}^{\infty} y[n-rN] \tag{8}$$</span></p>
<p>As a consequence of Eq.8, the match between the desired linear convolution output and the computed circular convolution output depends critically on the DFT length <span class="math-container">$N$</span>, which must be no smaller than the length <span class="math-container">$L_y$</span> of the desired sequence; otherwise, when <span class="math-container">$N < L_y$</span>, the result is corrupted by the time-domain <strong>aliasing</strong> implied by Eq.8.</p>
<p>Eventually, for computing linear convolution <span class="math-container">$y[n]$</span> of length <span class="math-container">$L_y$</span> between sequences <span class="math-container">$x[n]$</span> and <span class="math-container">$h[n]$</span> of lengths <span class="math-container">$L_x$</span> and <span class="math-container">$L_h$</span>, we multiply their <span class="math-container">$N$</span>-point DFTs <span class="math-container">$X[k]$</span> and <span class="math-container">$H[k]$</span>, and compute its inverse <span class="math-container">$N$</span>-point DFT, which yields <span class="math-container">$N$</span>-point circular convolution <span class="math-container">$\tilde{y}[n]$</span> between <span class="math-container">$N$</span>-point periodic extensions <span class="math-container">$\tilde{x}[n]$</span> and <span class="math-container">$\tilde{h}[n]$</span> of <span class="math-container">$x[n]$</span> and <span class="math-container">$h[n]$</span>. Extracting the first period of <span class="math-container">$\tilde{y}[n]$</span>, yields the desired <span class="math-container">$y[n]$</span>.</p>
<p>Finally, implementation of a linear convolution using DFT has the following steps:</p>
<ol start="0">
<li><p>Given inputs <span class="math-container">$x[n]$</span> and <span class="math-container">$h[n]$</span>of lengths <span class="math-container">$L_x$</span> and <span class="math-container">$L_h$</span></p>
</li>
<li><p>Choose DFT length <span class="math-container">$N$</span> according to: <span class="math-container">$$N \geq L_x + L_h -1$$</span> to guarantee alias-free reconstruction of <span class="math-container">$y[n]$</span>.</p>
</li>
<li><p>Compute <span class="math-container">$N$</span>-point DFTs <span class="math-container">$X[k]$</span> and <span class="math-container">$H[k]$</span> of <span class="math-container">$x[n]$</span> and <span class="math-container">$h[n]$</span>.</p>
</li>
<li><p>Compute <span class="math-container">$Y[k] = X[k]H[k]$</span></p>
</li>
<li><p>Compute <span class="math-container">$N$</span>-point inverse DFT of <span class="math-container">$Y[k]$</span> to produce the output <span class="math-container">$y[n]$</span> in the first <span class="math-container">$L_y$</span> samples.</p>
</li>
</ol>
<p>Note that each DFT above is actually implemented with an FFT for reason of efficiency.</p>
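<p>The five steps above can be sketched in a few lines of NumPy (the function name is mine; <code>np.fft.fft</code> is the FFT implementation mentioned):</p>

```python
import numpy as np

def fft_linear_convolve(x, h):
    """Steps 1-5: zero-pad to N >= Lx + Lh - 1, multiply DFTs, inverse DFT."""
    Ly = len(x) + len(h) - 1      # length of the desired linear convolution
    N = Ly                        # smallest alias-free DFT length
    X = np.fft.fft(x, N)          # N-point DFTs (computed via the FFT)
    H = np.fft.fft(h, N)
    y = np.fft.ifft(X * H)        # N-point circular convolution
    return np.real(y)[:Ly]        # real inputs: discard tiny imaginary residue

x = [1, 2, 3]
h = [1, -1]
print(fft_linear_convolve(x, h))  # same as np.convolve(x, h): [1, 1, 1, -3]
```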
| 445
|
convolution
|
Convolution in Continuous time
|
https://dsp.stackexchange.com/questions/13309/convolution-in-continious-time
|
<p><img src="https://i.sstatic.net/Md1Ow.png" alt="enter image description here"></p>
<p>Dear all,</p>
<p>Please find the attached image.
I am not able to understand why the underlined term is to be included.
Shouldn't it be simply the exponential term?</p>
|
<p>The $u(t-2)$ term needs to be included because the upper limit of the definite integral is $t-2$, not $\infty$:</p>
<p>\begin{align*}
h(t) &= e^{-(t-2)}u(t-2)\\
y(t) &= x(t) * h(t)= \int_{-\infty}^{\infty} x(t')h(t-t') dt'\\
&= \int_{-\infty}^{\infty} x(t')e^{-(t-t'-2)}u(t-t'-2) dt'\\
u(t-t'-2)&= \begin{cases} 1 & t'<t-2 \\ 0 & \text{otherwise} \end{cases}\\
y(t) &= \int_{-\infty}^{t-2} x(t')e^{-(t-t'-2)}(1) dt' + \int_{t-2}^{\infty} x(t')e^{-(t-t'-2)} (0) dt'\\
&= \int_{-\infty}^{t-2} x(t')e^{-(t-t'-2)} dt'.
\end{align*}</p>
| 446
|
convolution
|
Is there a convolution mistake in my method?
|
https://dsp.stackexchange.com/questions/19934/is-there-a-convolution-mistake-in-my-method
|
<p>I have the input signal $x(t)$ <img src="https://i.sstatic.net/XxdCL.png" alt="enter image description here"></p>
<p>And impulse response $h(t)=20 e^{-1000t} u(t)$ in which u(t) is the unit step function.</p>
<p>When I try a convolution, I thought the solutions would be something like:</p>
<p>$
\begin{array}{ll}
\int\limits_{0}^{t} 2\cdot 20e^{-1000(t-\tau)}d\tau = \frac{1}{25}(1-e^{-1000t}),& 0 \le t < 2 \\
\int\limits_{0}^{2} 2\cdot 20e^{-1000(t-\tau)}d\tau - \int\limits_{2}^{t} 20e^{-1000(t-\tau)}d\tau = \frac{3}{50}e^{2000-1000t}-\frac{e^{-1000t}}{25}-\frac{1}{50},& 2\le t < 3 \\
\int\limits_{0}^{2} 2\cdot 20e^{-1000(t-\tau)}d\tau - \int\limits_{2}^{3} 20e^{-1000(t-\tau)}d\tau = \frac{3}{50}e^{2000-1000t}-\frac{1}{50}e^{3000-1000t}-\frac{e^{-1000t}}{25},& t \geq 3 \\
\end{array}
$</p>
<p>But this did not look like the answer in the solution manual, so did I do something wrong here?</p>
<p>Answer in solution manual: <img src="https://i.sstatic.net/22Osi.png" alt="enter image description here"></p>
<p>Alternative solution in solution manual: <img src="https://i.sstatic.net/vPg5z.png" alt="enter image description here"></p>
|
<p>You just multiplied the two functions and integrated them but you didn't <em>convolve</em> them. You must compute</p>
<p>$$y(t)=\int_{-\infty}^{\infty}x(\tau)h(t-\tau)d\tau=2\int_0^2h(t-\tau)d\tau-\int_2^3h(t-\tau)d\tau$$</p>
<p>Alternatively, you can compute the step response</p>
<p>$$a(t)=\int_{-\infty}^th(\tau)d\tau$$</p>
<p>Because $x(t)=2u(t)-3u(t-2)+u(t-3)$ the output can be written in terms of the step response:</p>
<p>$$y(t)=2a(t)-3a(t-2)+a(t-3)$$</p>
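<p>A quick numerical cross-check of the step-response decomposition (a sketch with an assumed sampling step <code>dt</code>; the array lengths and slicing are my own choices, not part of the original problem):</p>

```python
import numpy as np

# Sketch: verify numerically that direct convolution of x with h equals the
# step-response combination y = 2a(t) - 3a(t-2) + a(t-3).
dt = 0.001
t = np.arange(0.0, 6.0, dt)
h = 20.0 * np.exp(-1000.0 * t)                        # h(t) = 20 e^{-1000 t} u(t)
x = 2.0 * (t < 2.0) - 1.0 * ((t >= 2.0) & (t < 3.0))  # x(t) = 2u(t) - 3u(t-2) + u(t-3)

y_direct = np.convolve(x, h)[: len(t)] * dt           # direct convolution x * h

a = np.cumsum(h) * dt                                 # step response a(t)

def delayed(sig, seconds):
    """Delay a causal, uniformly sampled signal by a whole number of samples."""
    n = int(round(seconds / dt))
    return np.concatenate([np.zeros(n), sig[: len(sig) - n]])

y_steps = 2.0 * a - 3.0 * delayed(a, 2.0) + delayed(a, 3.0)
print(np.max(np.abs(y_direct - y_steps)))             # agrees to rounding error
```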
| 447
|
convolution
|
Convolution - Massive spike in noise section
|
https://dsp.stackexchange.com/questions/31642/convolution-massive-spike-in-noise-section
|
<p>I am computing the correlation in MATLAB using the convolution function, with one signal flipped before being fed into the function. Here are the results.</p>
<p><a href="https://i.sstatic.net/sTydl.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/sTydl.jpg" alt="enter image description here"></a></p>
<p>The first plot is the actual signal, the second is the signal I am trying to find in the first, and the third is the convolution result. In the first plot we can see at $n=3000$ and beyond that the signal we are searching for is there, and this is confirmed by the spike in the third plot at $n = 3370$. However, there is a massive spike at the beginning of the convolution (plot #3, $n < 3000$) even though the beginning of the first signal is noise (plot #1, $n < 3000$). Why is there such a spike in the convolution plot?</p>
|
<p>If your searched-for signal is partially similar to itself at some offset, that can also show up when correlated against an actual input signal containing it. </p>
<p>Test your searched-for signal by autocorrelation against itself to check for this problem. If you want a sharp convolution peak against test input, you need a searched-for signal that has very low self-similarity at any lag. </p>
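<p>A small numpy sketch of that autocorrelation test (the random template is only a stand-in for a real searched-for signal):</p>

```python
import numpy as np

# A template with low self-similarity: its autocorrelation has one sharp
# peak at zero lag and only small values at every other lag.
rng = np.random.default_rng(0)
template = rng.standard_normal(256)          # hypothetical searched-for signal

ac = np.correlate(template, template, mode="full")
zero_lag = len(template) - 1                 # index of lag 0 in 'full' output
sidelobes = np.delete(ac, zero_lag)
print(ac[zero_lag], np.max(np.abs(sidelobes)))   # peak vs. worst off-peak value
```

If the worst sidelobe is not far below the zero-lag peak, expect spurious spikes in the correlation output even against pure noise.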
| 448
|
convolution
|
Identifying origin in convolution table
|
https://dsp.stackexchange.com/questions/44401/identifying-origin-in-convolution-table
|
<p>I am taking the convolution of $x(n)=\left\{2,1,-1,-2,3 \right\}$ with $n=0$ at the third position with $h(n)=\left\{1,2,0,3 \right\}$ with $n=0$ at the second position. The answer is $y(n)=\left\{2,5,1,-10, -10, -3, 6, -9 \right\}$ with the $n=0$ at the fourth position. I studied convolution more than a decade ago and I used a table to do the convolution sum as follows:</p>
<p><a href="https://i.sstatic.net/qVZgf.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/qVZgf.jpg" alt="enter image description here"></a></p>
<p>There used to be a simple rule to obtain the location of the $n=0$ position of the output $y(n)$ from the table itself, or perhaps from $x(n)$ and $h(n)$. But alas I cannot remember it. Grateful for any suggestions.</p>
|
<p>Given a signal <span class="math-container">$x[n]$</span>, guaranteed to be zero outside of the domain <span class="math-container">$N_1 \leq n \leq N_2$</span>, and the signal <span class="math-container">$h[n]$</span> being zero outside of <span class="math-container">$M_1 \leq n \leq M_2$</span>, then their convolution <span class="math-container">$y[n] = x[n] \star h[n]$</span> will be guaranteed to be zero outside of <span class="math-container">$N_1+M_1 \leq n \leq N_2 + M_2$</span>.</p>
<p>Applying this to your case yields the following:
For the signal <span class="math-container">$x[n]$</span> with <span class="math-container">$n=0$</span> at third position the nonzero range is <span class="math-container">$-2 \leq n \leq 2$</span> and for <span class="math-container">$h[n]$</span> with <span class="math-container">$n=0$</span> at the second position the nonzero range is <span class="math-container">$-1 \leq n \leq 2$</span> which therefore yields the nonzero range of their convolution <span class="math-container">$y[n]$</span> is:
<span class="math-container">$$ -3 \leq n \leq 4 $$</span></p>
<p>which means that the <span class="math-container">$n=0$</span> for <span class="math-container">$y[n]$</span> is at the fourth position.</p>
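<p>The same index bookkeeping in a few lines of numpy (only the support indices are checked here; <code>np.convolve</code> supplies the values):</p>

```python
import numpy as np

x = np.array([2, 1, -1, -2, 3])   # n = 0 at the third position  -> starts at n = -2
h = np.array([1, 2, 0, 3])        # n = 0 at the second position -> starts at n = -1
N1, M1 = -2, -1                   # start indices of x and h

y = np.convolve(x, h)
n = np.arange(N1 + M1, N1 + M1 + len(y))             # support of y: n = -3 .. 4
print(n[0], n[-1], int(np.where(n == 0)[0][0]) + 1)  # n = 0 sits at the fourth position
```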
| 449
|
convolution
|
Difficulties while understanding Convolution
|
https://dsp.stackexchange.com/questions/49578/difficulties-while-understanding-convolution
|
<p>I have just started convolution sum. I am able to do Mathematical convolution sum. But I am unable to make a figure of it in my mind. So I tried to go through the graphical convolution.
Below is the picture
<a href="https://i.sstatic.net/Zwgjn.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Zwgjn.jpg" alt=""></a></p>
<p>I understand the decomposition, but I don't understand how each decomposed part generates x[0]h[n], x[1]h[n-1], ...?</p>
<p>I hope a little help will make me understand the convolution sum.
Thanks</p>
|
<p>The convolution sum is
\begin{equation}
y[n] = \sum_k x[k]h[n-k]
\end{equation}
If you take the term corresponding to $k=0$, the contribution is $x[0]h[n]$. Similarly the contribution of $k=1$ is $x[1]h[n-1]$. Here, $h[n]$ is delayed by $n=1$. Similarly you can see how each term of the sum contributes to overall sum. The sum is taken over values $k$ where $x[k] \neq 0$ because only those terms contribute to the sum.</p>
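<p>The same idea in code, using made-up short sequences: the output is assembled as a sum of scaled, delayed copies of <span class="math-container">$h[n]$</span>, one per sample of <span class="math-container">$x[n]$</span>:</p>

```python
import numpy as np

x = np.array([3.0, -1.0, 2.0])     # assumed example input
h = np.array([1.0, 0.5, 0.25])     # assumed example impulse response

# Build y[n] = sum_k x[k] h[n-k] term by term: each k contributes a copy of h
# scaled by x[k] and delayed by k samples.
y = np.zeros(len(x) + len(h) - 1)
for k, xk in enumerate(x):
    y[k:k + len(h)] += xk * h

print(y, np.convolve(x, h))        # the two results agree
```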
| 450
|
convolution
|
Convolution - change of variables
|
https://dsp.stackexchange.com/questions/49906/convolution-change-of-variables
|
<p>I am struggling to understand the 3rd line of the expression below. The second line can be simply visualized by letting the kernel size be 3 and $i=2$, for example.</p>
<p>Then
$ y_2 = \sum_a x_{2+a-1}k_a = x_2k_1 + x_3k_2 + x_4k_3$</p>
<p>In the 3rd line, there is a change of variable letting $u=i+a-1$. What happens then ? How do I compute $y_2$ now ? </p>
<p>$ y_2 = \sum_u x_uk_{u-i+1} = ?$</p>
<p><a href="https://i.sstatic.net/ckubH.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ckubH.png" alt="enter image description here"></a></p>
| 451
|
|
convolution
|
Using Matlab's conv() function
|
https://dsp.stackexchange.com/questions/52891/using-matlabs-conv-function
|
<p><a href="http://matlabtricks.com/post-3/the-basics-of-convolution" rel="nofollow noreferrer">Here</a> is an example of convolution given:</p>
<p><a href="https://i.sstatic.net/xWkbq.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/xWkbq.png" alt="enter image description here"></a></p>
<p>I have two questions here:</p>
<ol>
<li><p>Why is the vector <strong>𝑥</strong> padded with two 0s on each side? As, the length of kernel <strong>ℎ</strong> is 3. If <strong>𝑥</strong> is padded with one 0 on each side, the middle element of convolution output would be within the range of the length of <strong>𝑥</strong>, why not one 0 on each side? </p></li>
<li><p>Explain the following output to me: </p>
<pre><code> >> x = [1, 2, 1, 3];
>> h = [2, 0, 1];
>> y = conv(x, h, 'valid')
y =
3 8
>>
</code></pre>
<p>What is <a href="https://www.mathworks.com/help/matlab/ref/conv.html" rel="nofollow noreferrer"><strong><code>valid</code></strong></a> doing here in the context of the previously shown mathematics on vectors <strong>𝑥</strong> and <strong>ℎ</strong>?</p></li>
</ol>
|
<p>For your second question, <em>valid</em> means the range of output where there is full overlap between the convolving signals. </p>
<p>Note that for two causal signals, <span class="math-container">$x[n]$</span> of length <span class="math-container">$L_x$</span> and <span class="math-container">$h[n]$</span> of length <span class="math-container">$L_h$</span> the convolution output <span class="math-container">$y[n]$</span> will be of length <span class="math-container">$L_y = L_x + L_h -1$</span>. </p>
<p>However, among those output <span class="math-container">$y[n]$</span> samples, the first <span class="math-container">$L_d = \min\{L_x,L_h\} -1$</span> and the last <span class="math-container">$L_d= \min\{L_x,L_h\} -1$</span> samples (the edge samples) are computed with only partial overlap between <span class="math-container">$x[n]$</span> and <span class="math-container">$h[n]$</span>, while the remaining center portion is computed with full overlap, the shorter sequence lying entirely inside the longer one.</p>
<p>In your case; <span class="math-container">$L_x = 4$</span> and <span class="math-container">$L_h=3$</span> then the first and last <span class="math-container">$2$</span> samples will be discarded and the center <span class="math-container">$2$</span> samples will be returned.</p>
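<p>The same example in numpy, whose <code>'valid'</code> mode matches MATLAB's <code>conv(x, h, 'valid')</code>: the valid output is exactly the fully-overlapped center of the full convolution.</p>

```python
import numpy as np

x = np.array([1, 2, 1, 3])
h = np.array([2, 0, 1])

full = np.convolve(x, h)             # length 4 + 3 - 1 = 6
valid = np.convolve(x, h, 'valid')   # length 4 - 3 + 1 = 2
print(full, valid)                   # valid equals full[2:4] -> [3 8]
```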
| 452
|
convolution
|
Graphical DT Convolution
|
https://dsp.stackexchange.com/questions/56392/graphical-dt-convolution
|
<p>Attached are question 1 part B, the solution, and my attempt. My attempt seems to be incorrect; can someone comment?<a href="https://i.sstatic.net/AUfW9.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/AUfW9.png" alt="Question"></a></p>
<p><a href="https://i.sstatic.net/SFjWF.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/SFjWF.png" alt="Solution"></a></p>
<p><a href="https://i.sstatic.net/JziMr.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/JziMr.jpg" alt="Attempt"></a></p>
|
<p>To elaborate the point of my comment:</p>
<p>What you did there is a kind of circular convolution, where you thought of the impulse response <span class="math-container">$h(t)$</span> as periodic, i.e. the system triggered by <span class="math-container">$\delta(0)$</span> would oscillate until infinity. The periodicity is represented by your "wrapping around" the impulse response. The output you calculated is actually just one period of the output signal.</p>
<p>This is different from the linear convolution that the task formulation obviously expects to be applied here. With the linear convolution, the impulse response is thought to be the graph representing <span class="math-container">$h(n)$</span> padded by zeros from <span class="math-container">$-\infty$</span> to <span class="math-container">$0$</span> and from <span class="math-container">$3$</span> to <span class="math-container">$\infty$</span>. Here, if the system is triggered by <span class="math-container">$\delta(0)$</span>, you would get <span class="math-container">$h(n)$</span> and then all zeros to infinity. </p>
| 453
|
convolution
|
Is there a process for deriving special cases of convolution?
|
https://dsp.stackexchange.com/questions/63741/is-there-a-process-for-deriving-special-cases-of-convolution
|
<p>For my signals and systems homework, I was met with the following problem:</p>
<p>Consider a CT system with impulse response given by</p>
<p><span class="math-container">$$h(t) = e^{-|t|}$$</span></p>
<p>and a step input <span class="math-container">$x(t) = u(t)$</span>. Using the definition of convolution, determine an expression for the output <span class="math-container">$y(t) = x(t)∗h(t)$</span>.</p>
<p>I approached this problem using the standard method taught in my class. I set up the following integral:</p>
<p><span class="math-container">$$\int_{-\infty}^{+\infty}h(\tau)x(t-\tau)d\tau$$</span></p>
<p>While evaluating the integral, I got stuck at the end as the integral approached infinity. I decided to rearrange <span class="math-container">$h(t)$</span> by expressing it in a different form. The form I then got it in was:</p>
<p><span class="math-container">$$h(t) = e^{-t}u(t) + e^tu(-t)$$</span></p>
<p>Which then allowed me to match it to the following convolution on a table given with my homework.<a href="https://i.sstatic.net/4gkNP.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/4gkNP.png" alt="enter image description here"></a></p>
<p>From doing this problem it seems to me that there are special cases for convolutions. Ones that which the standard method of evaluation doesn't work. </p>
<p>Is there a process to derive these special cases? If so can an example be given? And why wouldn't the standard process work for this example? </p>
|
<p>Well, indeed there are special cases for convolutions but yours is quite straightforward. You have to consider two cases: <span class="math-container">$t<0$</span> and <span class="math-container">$t>0$</span>. </p>
<p>If you sketch the convolution process for these two cases, by sliding your time shifted and time reversed input on the same axis as your impulse response, you will end up with the following:</p>
<p><a href="https://i.sstatic.net/7qhqY.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/7qhqY.png" alt="enter image description here"></a></p>
<p>Can you handle the integrals shown?</p>
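<p>As a numerical sanity check of those two integrals (using my own working that the result is <span class="math-container">$y(t)=e^t$</span> for <span class="math-container">$t<0$</span> and <span class="math-container">$y(t)=2-e^{-t}$</span> for <span class="math-container">$t \ge 0$</span>, not a result stated in the task):</p>

```python
import numpy as np

# Riemann-sum check of y(t) = integral of e^{-|tau|} from -inf to t,
# against the piecewise closed form e^t (t < 0) and 2 - e^{-t} (t >= 0).
dt = 0.001
tau = np.arange(-20.0, 20.0, dt)
h = np.exp(-np.abs(tau))

results = []
for t in (-1.5, 0.0, 2.0):
    numeric = np.sum(h[tau <= t]) * dt
    closed = np.exp(t) if t < 0 else 2.0 - np.exp(-t)
    results.append((numeric, closed))
    print(t, numeric, closed)
```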
| 454
|
convolution
|
Confusions regarding convolution?
|
https://dsp.stackexchange.com/questions/63971/confusions-regarding-convolution
|
<p>As far as I have studied and understood, convolution is the process by which we can determine the output of LTI systems.
While reading one web link about convolution, I came across certain notation that I couldn't understand, as highlighted in the attached photo.</p>
<p><strong>1)Are these notations referring to delay/shift in input and impulse response?</strong></p>
<p><strong>2) It also mentions that periodic or circular convolution is also called fast convolution</strong>, as highlighted in the last line of the 2nd photo attached. Is this idea correct?</p>
<p><a href="https://i.sstatic.net/7DRUz.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/7DRUz.png" alt="enter image description here"></a></p>
<p><a href="https://i.sstatic.net/qjHyk.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/qjHyk.png" alt="enter image description here"></a></p>
|
<blockquote>
<p>1)Are these notations referring to delay/shift in input and impulse response?</p>
</blockquote>
<p>Yes</p>
<blockquote>
<p>2) It also mentions that periodic or circular convolution is also called fast convolution</p>
</blockquote>
<p>That's a bit of a misrepresentation. For a non-trivial length of signal and/or impulse response the fastest way to implement convolution is to multiply in the frequency domain. That does indeed implement circular convolution. That's typically NOT what you actually want. If you want to leverage frequency domain multiplication for linear convolution, you need to add the right amount of zero padding and/or overlap handling to your algorithm. Read up on "Overlap Add" or "Overlap Save" algorithm.</p>
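<p>A minimal sketch of the zero-padding point: multiplying DFTs of the raw sequences gives circular convolution, but padding both to length <span class="math-container">$N+M-1$</span> first recovers linear convolution.</p>

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.standard_normal(64)
h = rng.standard_normal(16)

L = len(x) + len(h) - 1                      # enough zero padding for linear conv.
fast = np.real(np.fft.ifft(np.fft.fft(x, L) * np.fft.fft(h, L)))
direct = np.convolve(x, h)
print(np.max(np.abs(fast - direct)))         # agrees to rounding error
```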
| 455
|
convolution
|
Properties of convolution same as LTI systems?
|
https://dsp.stackexchange.com/questions/66460/properties-of-convolution-same-as-lti-systems
|
<p>I have attached a snapshot from <em>Signal Processing First</em>,
where the author says that the properties of LTI systems are the same as the properties of convolution, as shown in the attached snapshot.</p>
<p>Are these properties exactly the same for both cases, convolution and LTI systems? And do we not need to apply any scaling etc.?</p>
<p><a href="https://i.sstatic.net/iQtAv.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/iQtAv.jpg" alt="enter image description here"></a></p>
|
<p>The author simply means that the response of an LTI system <span class="math-container">$h[n]$</span> can be found by convolving the input sequence <span class="math-container">$x[n]$</span> with the <span class="math-container">$h[n]$</span>. And in doing that all properties of convolution will hold true for LTI systems. Like, frequency response of an LTI system will simply become <span class="math-container">$X(e^{j\omega}).H(e^{j\omega})$</span>, because convolution in time means product in <span class="math-container">$\omega$</span> domain.</p>
<p>You can look through all the properties on convolution operation at : <a href="https://en.wikipedia.org/wiki/Convolution#Properties" rel="nofollow noreferrer">ConvolutionProperties</a>, and be sure that all of them will be held by an LTI System as well.</p>
<p>For example : Associativity : <span class="math-container">$$(x*h_1)*h_2 = x*(h_1*h_2)$$</span>
In context with LTI systems, it means that if <span class="math-container">$x[n]$</span> is the input to two cascaded LTI systems <span class="math-container">$h_1[n]$</span> and <span class="math-container">$h_2[n]$</span>, then we can actually, find out the equivalent LTI system by convolving the 2 cascaded systems as <span class="math-container">$(h_1*h_2)[n]$</span> and then find the output of this equivalent system by applying input <span class="math-container">$x[n]$</span> to it.</p>
<p>Similarly, you can see the relation between all of the convolution properties in context of an LTI system.</p>
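<p>For instance, associativity is easy to check numerically, with arbitrary sequences standing in for the input and the two cascaded systems:</p>

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(32)    # input
h1 = rng.standard_normal(8)    # first LTI system
h2 = rng.standard_normal(8)    # second LTI system

lhs = np.convolve(np.convolve(x, h1), h2)   # (x * h1) * h2
rhs = np.convolve(x, np.convolve(h1, h2))   # x * (h1 * h2)
print(np.max(np.abs(lhs - rhs)))            # zero up to rounding error
```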
| 456
|
convolution
|
How to index output of convolution product
|
https://dsp.stackexchange.com/questions/68530/how-to-index-output-of-convolution-product
|
<p>Unfortunately indexing the output of convolution product confuses me. Suppose <span class="math-container">$x[n]$</span> is of length <span class="math-container">$15$</span> (i.e. <span class="math-container">$0 <= n <= 14$</span>) and <span class="math-container">$y[n]$</span> has the length of <span class="math-container">$9$</span> (<span class="math-container">$0 <= n <= 8$</span>). if <span class="math-container">$z[n]$</span> be the output of convolution product of <span class="math-container">$x[n]$</span> and <span class="math-container">$y[n]$</span>, that is <span class="math-container">$$z[n] = (x * y)[n] = \sum_{k=-\infty}^{+\infty} x[k]y[n-k]$$</span>
As a result, <span class="math-container">$z[n]$</span> would have a length of <span class="math-container">$23$</span>, right?
Now the question is what is the range of <span class="math-container">$n$</span> for <span class="math-container">$z[n]$</span>? Is it <span class="math-container">$0 <= n <= 22$</span> or <span class="math-container">$-4 <= n <= 18$</span>?<br />
From the formula for convolution I think the latter is true. But a problem arises when I work with seismic signals.<br />
Suppose <span class="math-container">$acc[t_i]$</span> is the seismic signal of the Kobe earthquake, where <span class="math-container">$0 <= t_i <= 41.99 sec$</span> with sampling period <span class="math-container">$T_s = 0.01 sec$</span> (signal length <span class="math-container">$4200$</span>), as depicted below:</p>
<p><a href="https://i.sstatic.net/0ScLU.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/0ScLU.png" alt="Seismic signal, Kobe Japan, 1/16/1995" /></a>
Now it is desired to denoise <span class="math-container">$acc[t_i]$</span> using moving average filter with impulse response <span class="math-container">$h[t_i]$</span>:<br />
<span class="math-container">$$ h[t_i] =
\begin{cases}
\frac{1}{30}, & \text{if $0 <= t_i <= 0.29$} \\
0, & \text{if $0.3 <= t_i <= 41.99$}
\end{cases}$$</span>
where <span class="math-container">$T_s = 0.01 sec$</span>. After applying <span class="math-container">$h[t_i]$</span> to <span class="math-container">$acc[t_i]$</span> by convolution, the output is the sequence <span class="math-container">$ACC[t_i]$</span> of length 8399, as shown in the following figure.<br />
<a href="https://i.sstatic.net/KlSY8.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/KlSY8.png" alt="Denoised seismic signal, Kobe Japan, 1/16/1995" /></a>
Obviously the result of the filtering process is the interval marked by red bullets. However, according to the convolution formula, I think I should label the time axis between approximately <span class="math-container">$-21sec$</span> and <span class="math-container">$63sec$</span>.<br />
Is that right or something is missing?</p>
|
<p>If <span class="math-container">$x[n]$</span> and <span class="math-container">$y[n]$</span> are both causal and starting at index <span class="math-container">$0$</span>, then the result of convolution will also be causal and it will start at index <span class="math-container">$0$</span>. Just plug in <span class="math-container">$n=-4$</span> in the expression for <span class="math-container">$z[n]$</span>, you will find that it will be <span class="math-container">$0$</span>.
<span class="math-container">$$z[n] = \sum^{\infty}_{k=-\infty}x[k]y[n-k]$$</span>
First non-zero term in above expression is at <span class="math-container">$n=0$</span>. Because for <span class="math-container">$k<0$</span>, <span class="math-container">$x[k]$</span> will be <span class="math-container">$0$</span>, you can re-write the above expression as:
<span class="math-container">$$z[n] = \sum^{\infty}_{k=0}x[k]y[n-k]$$</span>
In the above expression, put <span class="math-container">$n<0$</span>, you will see that <span class="math-container">$y[n-k]$</span> will always be 0.</p>
<p>There will be a transient delay of <span class="math-container">$N-1$</span> samples, when you are using Moving-Average FIR Filter of length <span class="math-container">$N$</span>, not an advance. The delay is due to the fact that you need <span class="math-container">$N-1$</span> previous samples of the input before you can produce an average of <span class="math-container">$N$</span> samples. You cannot have an output before you feed input to a causal FIR filter. Expecting an output at <span class="math-container">$-21sec$</span>, is expecting to look into future which is not correct here.</p>
<p>The figure you have posted shows that you have 8399 output samples starting at <span class="math-container">$n=0$</span>. As you can see, the seismic activity starts at approximately <span class="math-container">$5sec$</span>; the moving average just smooths out the original input signal, with a transient of <span class="math-container">$0.3sec$</span>.</p>
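<p>The transient is easy to see with a constant input (a sketch with assumed lengths): the first <span class="math-container">$N-1$</span> outputs ramp up while the zero-padded history fills.</p>

```python
import numpy as np

N = 30
h = np.ones(N) / N                  # N-tap moving-average impulse response
x = np.ones(200)                    # constant input

y = np.convolve(x, h)[: len(x)]     # causal filtering, zero initial state
print(y[:3], y[N - 1])              # ramps up: 1/30, 2/30, ... then 1.0 at n = N-1
```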
| 457
|
convolution
|
Prove that the twisted Convolution of $f$ and $g$ lies in $L^2(C^n)$?
|
https://dsp.stackexchange.com/questions/76627/prove-that-the-twisted-convolution-of-f-and-g-lies-in-l2cn
|
<p>Edit,this is the <strong>exact phrasing</strong> of my question</p>
<blockquote>
<p>Let <span class="math-container">$\lambda\in\mathbb{R^n}$</span>.Prove that</p>
<p><span class="math-container">$f\star_{\lambda}g\in L^2(\mathbb{C^n})$</span> for all functions <span class="math-container">$f$</span> and <span class="math-container">$g$</span>
in <span class="math-container">$L^2(\mathbb{C^n})$</span>.What Happens when <span class="math-container">$\lambda=0?$</span></p>
</blockquote>
<p>I have been reading the book <em>Harmonic Analysis on the Heisenberg Group</em>
by Sundaram Thangavelu, and on page 16 it explains that the twisted convolution turns <span class="math-container">$L^1(C^n)$</span> into a non-commutative Banach algebra
<a href="https://i.sstatic.net/wTMoZ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/wTMoZ.png" alt="enter image description here" /></a></p>
<p><strong>I define the twisted convolution as follows:</strong>
<a href="https://i.sstatic.net/pJX20.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/pJX20.jpg" alt="Definition of Twisted Convolution" /></a>
Where <span class="math-container">$[z,w]$</span> is the Symplectic form given by
<a href="https://i.sstatic.net/0mJMM.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/0mJMM.jpg" alt="Definition of the Symplectic form of [z,w]" /></a></p>
<p>I want to prove that the twisted convolution for <span class="math-container">$f$</span> and <span class="math-container">$g$</span> lies in <span class="math-container">$L^2(C^n)$</span><strong>How to you prove this fact?</strong>
I note that when <span class="math-container">$\lambda=0$</span>,they do not lie in <span class="math-container">$L^2(C^n)$</span>.
Here is my attempt at the question when <span class="math-container">$\lambda=0$</span> does not lie in <span class="math-container">$L^2(C^n)$</span>.
My attempt when <span class="math-container">$\lambda=0$</span>: I want to note that <span class="math-container">$f$</span> and <span class="math-container">$g$</span> are Schwartz functions
<a href="https://i.sstatic.net/cGi03.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/cGi03.jpg" alt="My attempt" /></a>
<a href="https://i.sstatic.net/pJX20.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/pJX20.jpg" alt="My attempt at proving that the normal convolution(when <span class="math-container">$\lambda=0$</span>) of <span class="math-container">$f$</span> and <span class="math-container">$g$</span> does not lie in <span class="math-container">$L^2(C^n)$</span>" /></a></p>
<p>How do I approach this more general case?
I have been reading numerous references to help me solve this problem, such as</p>
<ol>
<li>Pseudo-Differential Operators, Generalized Functions and Asymptotics</li>
<li><a href="https://terrytao.wordpress.com/2019/07/26/twisted-convolution-and-the-sensitivity-conjecture/" rel="nofollow noreferrer">https://terrytao.wordpress.com/2019/07/26/twisted-convolution-and-the-sensitivity-conjecture/</a>
and numerous other sources and still could not figure it out</li>
</ol>
<p>What specific function would you use to prove this fact? I need some assistance here.</p>
<p>Or could you refer me to another book that has a chapter covering the identities and full proofs related to twisted convolutions?</p>
<p>I have been stuck on this problem for two days and need help.</p>
| 458
|
|
convolution
|
Convolution output signal transient correction
|
https://dsp.stackexchange.com/questions/89536/convolution-output-signal-transient-correction
|
<p>I'm just starting out in signal analysis and I've come across this effect. When I convolve a sinusoid with any other signal, whether a triangular pulse, a rectangular pulse or a decaying exponential (regardless of whether it is causal), a transient segment appears in the output signal. Is this effect of the convolution? How can it be solved? If I add initial conditions or conditions for t<0 will this effect be corrected?
<a href="https://i.sstatic.net/HzZUx.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/HzZUx.png" alt="enter image description here" /></a></p>
|
<blockquote>
<p>Is this effect of the convolution?</p>
</blockquote>
<p>Yes.</p>
<p>If you convolve two finite signals of lengths <span class="math-container">$N$</span> and <span class="math-container">$K$</span>, the resulting signal will have a length of <span class="math-container">$M = N + K - 1$</span>. Let's take a look at the convolution sum for an FIR filter</p>
<p><span class="math-container">$$y[n] = \sum_{k = 0}^{K-1} h[k]\cdot x[n-k]$$</span></p>
<p>We can see that for <span class="math-container">$n = 0$</span> the sum goes all the way back to input sample <span class="math-container">$x[-K+1]$</span>. Since we have declared <span class="math-container">$x[n]$</span> to be finite on <span class="math-container">$[0,N-1]$</span>, most convolution algorithms just assume zeros, i.e. <span class="math-container">$x[n] = 0, n < 0$</span>. You can think of this as prepending <span class="math-container">$K-1$</span> zeros to the input signal. That creates the transient.</p>
<blockquote>
<p>How can it be solved? If I add initial conditions or conditions for t<0 will this effect be corrected?</p>
</blockquote>
<p>Yes. Accurate initial conditions will fix this. Along the same lines: you can just take a longer chunk of input signal and then simply discard the initial transient.</p>
<p>However, depending on the application, that's not always possible and in some cases you just have to live with it. There is no "one size fits all" solution.</p>
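<p>For the special case of a periodic input, supplying the history yourself is straightforward (a sketch with an assumed sample rate, tone frequency, and filter length): prepend one extra period, filter, and discard the warm-up samples.</p>

```python
import numpy as np

fs, f = 1000, 5                     # assumed sample rate and tone frequency
K = 31                              # assumed FIR length
h = np.ones(K) / K                  # simple moving-average smoother

t = np.arange(0.0, 2.0, 1.0 / fs)
x = np.sin(2 * np.pi * f * t)       # exactly periodic over the record

period = fs // f                                      # samples per period
x_pad = np.concatenate([x[-period:], x])              # prepend steady-state history
y = np.convolve(x_pad, h)[period : period + len(x)]   # discard the warm-up

# Reference: steady-state output via circular convolution of the periodic input.
y_ref = np.real(np.fft.ifft(np.fft.fft(x) * np.fft.fft(h, len(x))))
print(np.max(np.abs(y - y_ref)))                      # no start-up transient left
```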
| 459
|
convolution
|
What is convolution of two sine waves (tones)?
|
https://dsp.stackexchange.com/questions/18480/what-is-convolution-of-two-sine-waves-tones
|
<p>Convolution of two sine waves (or tones, as they are called in audio) is theoretically not defined, as the integral diverges. Taking finite-duration windowed sine waves and computing their convolution numerically, the result always contains a fundamental frequency equal to that of the lower-frequency sine wave. I am not getting an intuitive understanding of this. Can anybody share more insight into this?</p>
|
<p>Convolution in the time domain is equivalent to multiplication in the frequency domain. </p>
<p>If you window two sinusoids in the time domain to get finite length waveforms, and the two sinusoids are exactly integer periodic in the window width, then the DFT will be impulses. If the frequencies are different, the impulses in the two DFT will be disjoint, and the multiplication of the two spectrum will result in zero output. </p>
<p>If one or both of the two sinusoids are not exactly integer periodic in your window width, then the FT will result in a Sinc function. Then the convolution will be equivalent to multiplying a Sinc function against an impulse or another Sinc function, which will produce a non-zero result.</p>
| 460
|
convolution
|
Convolution in frequency domain, what is the output bandwidth
|
https://dsp.stackexchange.com/questions/43828/convolution-in-frequency-domain-what-is-the-output-bandwidth
|
<p>If two signals are convolved in the frequency domain, is the output width/bandwidth related to the widths/bandwidths of the input signals?
Is it their sum, or something else?</p>
|
<p>For signals <span class="math-container">$x(t)$</span> and <span class="math-container">$h(t)$</span> having domains <span class="math-container">$ t_1 < t < t_2 $</span> and <span class="math-container">$ t_3 < t < t_4 $</span>, respectively, their convolution in the time domain <span class="math-container">$y(t)=x(t) \star h(t)$</span> has the following domain: <span class="math-container">$ t_1 + t_3 < t < t_2 + t_4$</span>.</p>
<p>The corresponding result for convolution in the frequency-domain is obtained by replacing <span class="math-container">$t$</span> with <span class="math-container">$\omega$</span> . Given two frequency-domain signals with domain of supports (for positive frequencies) such that <span class="math-container">$X(\omega) \neq 0$</span> for <span class="math-container">$\omega_1 < \omega < \omega_2$</span> and <span class="math-container">$H(\omega) \neq 0$</span> for <span class="math-container">$\omega_3 < \omega < \omega_4$</span>, then their convolution <span class="math-container">$Y(\omega) = X(\omega) \star H(\omega)$</span> will have the domain of support as <span class="math-container">$Y(\omega) \neq 0$</span> for <span class="math-container">$$ \omega_1 + \omega_3 < \omega < \omega_2 + \omega_4 $$</span></p>
<p>Therefore the bandwidth of <span class="math-container">$Y(\omega)$</span> is related to the bandwidths of <span class="math-container">$X(\omega)$</span> and <span class="math-container">$H(\omega)$</span> as:</p>
<p><span class="math-container">$$ BW_Y = (\omega_2 + \omega_4) - (\omega_1 + \omega_3) $$</span>
<span class="math-container">$$ BW_Y = (\omega_4 - \omega_3) + (\omega_2 - \omega_1) $$</span>
<span class="math-container">$$ BW_Y = BW_H + BW_X $$</span></p>
<p>This assumes that the domains were <strong>compact</strong> and <strong>finite</strong>. If they are either noncompact or non-finite the result should be modified accordingly.</p>
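<p>A discrete sketch of the support-addition rule (arbitrary made-up supports): convolving sequences that are nonzero on <span class="math-container">$[10,19]$</span> and <span class="math-container">$[5,11]$</span> gives a result that is nonzero exactly on <span class="math-container">$[15,30]$</span>.</p>

```python
import numpy as np

X = np.zeros(64); X[10:20] = 1.0   # nonzero on [10, 19]
H = np.zeros(64); H[5:12] = 1.0    # nonzero on [5, 11]

Y = np.convolve(X, H)
nz = np.nonzero(Y)[0]
print(nz[0], nz[-1])               # 15 30: the support endpoints add
```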
| 461
|
convolution
|
A better explaination for 'there's no flipping of the impulse response'
|
https://dsp.stackexchange.com/questions/8530/a-better-explaination-for-theres-no-flipping-of-the-impulse-response
|
<p>I showed the question about flipping the impulse response to some math teachers and electrical engineers at an university. They read Dilip's answer and thought that it was hard to understand what Dilip meant and it was simply not true...</p>
<p>Could someone confirm that what Dilip Sarwate wrote really isn't true, and explain it in a way that is easier to read?</p>
<p>See: <a href="https://dsp.stackexchange.com/questions/5992/flipping-the-impulse-response-in-convolution/6355#6355">Flipping the impulse response in convolution</a></p>
<blockquote>
<p>There is no "flipping" of the impulse response by a linear (time-invariant) system. The output of a linear time-invariant system is the sum of scaled and time-delayed versions of the impulse response, not the "flipped" impulse response.</p>
</blockquote>
|
<p>Flipping the impulse response is really just a matter of perspective. The LTI system doesn't care about perspective. In any case, here is a graphic showing a system that takes an input of color weighted impulses.</p>
<p><img src="https://i.sstatic.net/Z2f2T.jpg" alt="enter image description here"></p>
| 462
|
convolution
|
Is there any meaning to the extra-points introduced by a convolution product?
|
https://dsp.stackexchange.com/questions/94494/is-there-any-meaning-to-the-extra-points-introduced-by-a-convolution-product
|
<p>Mathematically speaking, when convolving a <span class="math-container">$N$</span>-points signal with a <span class="math-container">$M$</span>-points signal, the resulting signal contains <span class="math-container">$N+M-1$</span> points.</p>
<p>In practice, the <span class="math-container">$N$</span>-points signal may be an audio signal to which we want to remove some frequency components, so we design a filter whose impulse response fits our desires.</p>
<p>Then, we convolve the <span class="math-container">$N$</span>-points audio signal with the <span class="math-container">$M$</span>-points impulse response of the filter, resulting in an output signal containing <span class="math-container">$N+M-1$</span> points, and because <span class="math-container">$N+M-1>N$</span>, the filtered audio signal is longer than the raw audio signal.</p>
<p>In this case, is there any real meaning to these extra-points ? I guess that we just discard them, and keep only the <span class="math-container">$N$</span> first points for the output signal, is that true ?</p>
|
<p>Sure, the samples at the end contain the convolution of the last samples of your signal with the filter. Imagine your filter being a bit boring, having the (odd-length) impulse response</p>
<p><span class="math-container">$$(\underbrace{0,0, \ldots, 0}_{(M-1)/2\text{ zeros}}, 1,\underbrace{0,0, \ldots, 0}_{(M-1)/2\text{ zeros}}).$$</span></p>
<p>Clearly, that filter will just delay your signal by <span class="math-container">$(M-1)/2$</span> samples. Can't cut that off if you want to preserve the original signal!</p>
<p>FIR filters as applied in DSP are causal, meaning that by principle, even if they passed all frequencies, the <em>group delay</em> experienced by each frequency in the signal is positive (for linear-phase, i.e., symmetrical, filters, it's exactly half the filter length). So, strictly speaking, you can't cut off anything at the end, because that tail contains signal effects!</p>
<p>So, you get <span class="math-container">$N+M-1$</span> samples, but only want to keep <span class="math-container">$N$</span>. Which one to cut off then, if not the end ones? The ones at the beginning? Not really better, because the first sample coming out of the filter (if that filter is not the non-sense toy example above) will already contain influence of the first signal sample.</p>
<p>There's no "right" answer here. Throwing half <span class="math-container">$M-1$</span> from both the beginning and end is somewhat of a golden middle ground. In practice, it doesn't matter <em>that</em> much in audio – ideally, there's a slow fade in and fade out, otherwise the unfiltered signal already started and ended very abruptly.</p>
<p>But I see where this becomes a problem: imagine you've got a music album, say, some progressive rock thing, that is nominally split into CD tracks containing songs of 10 minutes each, but the transitions between them are smooth – if you put the CD in your CD player (I know, who's got these anymore?) you don't notice where "Pseudo Silk Kimono" ends and "Kayleigh" starts. However, because you're compressing these tracks individually to, say, OPUS encoded files, your codec is letting a very relaxed low-pass filter pass over the audio to get rid of inaudible whine before compression.</p>
<p>Now, because the encoder only sees the tracks individually, not as whole album, at the beginning of the second track, it has no information on the last <span class="math-container">$M-1$</span> samples of the previous track; it needs to invent them just to give the filter a starting state.</p>
<p>Typically, you just internally repeat the first <span class="math-container">$M-1$</span> samples before the actual first sample that you filter – that way, the short-time Fourier transform over that duration is consistent. And that works pretty well for the human ear – on playback of the full album from filtered compressed files, you'll typically not notice that the encoder couldn't know how the previous track ended.</p>
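<p>The "trim half of <span class="math-container">$M-1$</span> from each end" middle ground described above is exactly what numpy's <code>same</code> convolution mode does for an odd-length filter. A small sketch (the Hanning window is just a toy linear-phase lowpass for the demo):</p>

```python
import numpy as np

rng = np.random.default_rng(0)
N, M = 100, 21                           # signal length and odd filter length
x = rng.standard_normal(N)
h = np.hanning(M); h /= h.sum()          # toy linear-phase lowpass

y_full = np.convolve(x, h)               # length N + M - 1
y_same = np.convolve(x, h, mode="same")  # length N: (M-1)/2 trimmed at each end

assert len(y_full) == N + M - 1
assert np.allclose(y_same, y_full[(M - 1) // 2 : (M - 1) // 2 + N])
```

<p>So <code>mode="same"</code> keeps the centered <span class="math-container">$N$</span> samples of the full <span class="math-container">$N+M-1$</span>-sample output, compensating the linear-phase filter's delay of <span class="math-container">$(M-1)/2$</span> samples.</p>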
| 463
|
convolution
|
This isn't a typical convolution, what is it then?
|
https://dsp.stackexchange.com/questions/94669/this-isnt-a-typical-convolution-what-is-it-then
|
<p>In the numerous PlayStation emulators there are out there, this small piece of code is used to resample the input to feed to the reverb, which only works at 22050 Hz:</p>
<p><a href="https://github.com/libretro-mirrors/mednafen-git/blob/master/src/psx/spu_reverb.inc#L69" rel="nofollow noreferrer">https://github.com/libretro-mirrors/mednafen-git/blob/master/src/psx/spu_reverb.inc#L69</a></p>
<pre><code>// Zeroes optimized out; middle removed too(it's 16384)
static const int16 ResampTable[20] =
{
-1, 2, -10, 35, -103, 266, -616, 1332, -2960, 10246, 10246, -2960, 1332, -616, 266, -103, 35, -10, 2, -1,
};
static INLINE int32 Reverb4422(const int16 *src)
{
int32 out = 0; // 32-bits is adequate(it won't overflow)
for(unsigned i = 0; i < 20; i++)
out += ResampTable[i] * src[i * 2];
// Middle non-zero
out += 0x4000 * src[19];
out >>= 15;
clamp(&out, -32768, 32767);
return(out);
}
</code></pre>
<p>If you look closely, the taps aren't applied to the samples in the usual way, rather, only one tap per sample.</p>
<p>Can you explain what's the logic behind this and why does it work?</p>
|
<p>I assume that it is a 2x downsampler (from 44100 to 22050) implemented using a halfband lowpass filter (~50% of the taps are zero).</p>
<p>The code appears to produce only a single output sample (must be called repeatedly with the *src pointer shifted appropriately each time). Thus it is not what I would think of as a "convolution" but rather a (specialized) inner-product</p>
<p>If you want to analyze the filter I think you can do this:</p>
<pre><code>h = [-1, 0, 2, 0, -10, 0, 35, 0, -103, 0, 266, 0, -616, 0, 1332, 0, -2960, 0, 10246, 16384];
h = [h(1:end), h(end-1:-1:1)];
figure,
freqz(h,1,1024,44100)
</code></pre>
<p><a href="https://i.sstatic.net/AeHNc18J.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/AeHNc18J.png" alt="enter image description here" /></a></p>
| 464
|
convolution
|
What is the physical meaning of the convolution of two signals?
|
https://dsp.stackexchange.com/questions/4723/what-is-the-physical-meaning-of-the-convolution-of-two-signals
|
<p>If we convolve 2 signals we get a third signal. What does this third signal represent in relation to the input signals?</p>
|
<p>There's not particularly any "physical" meaning to the convolution operation. The main use of convolution in engineering is in describing the output of a <a href="http://en.wikipedia.org/wiki/LTI_system_theory">linear, time-invariant (LTI)</a> system. The input-output behavior of an LTI system can be characterized via its <a href="http://en.wikipedia.org/wiki/Impulse_response">impulse response</a>, and the output of an LTI system for any input signal $x(t)$ can be expressed as the convolution of the input signal with the system's impulse response. </p>
<p>Namely, if the signal $x(t)$ is applied to an LTI system with impulse response $h(t)$, then the output signal is:</p>
<p>$$
y(t) = x(t) * h(t) = \int_{-\infty}^{\infty}x(\tau)h(t - \tau)d\tau
$$</p>
<p>Like I said, there's not much of a physical interpretation, but you can think of a convolution qualitatively as "smearing" the energy present in $x(t)$ out in time in some way, dependent upon the shape of the impulse response $h(t)$. At an engineering level (rigorous mathematicians wouldn't approve), you can get some insight by looking more closely at the structure of the integrand itself. You can think of the output $y(t)$ as the sum of an infinite number of copies of the impulse response, each shifted by a slightly different time delay ($\tau$) and scaled according to the value of the input signal at the value of $t$ that corresponds to the delay: $x(\tau)$. </p>
<p>This sort of interpretation is similar to taking discrete-time convolution (discussed in Atul Ingle's answer) to a limit of an infinitesimally-short sample period, which again isn't fully mathematically sound, but makes for a decently intuitive way to visualize the action for a continuous-time system.</p>
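<p>The "sum of shifted, scaled copies of the impulse response" reading is easy to check in discrete time. A small numpy sketch (the particular numbers in <code>x</code> and <code>h</code> are arbitrary):</p>

```python
import numpy as np

x = np.array([2.0, -1.0, 0.5])            # input samples
h = np.array([1.0, 0.5, 0.25, 0.125])     # impulse response

# Superposition: each input sample launches a copy of h,
# delayed by its position k and scaled by its value x[k].
y = np.zeros(len(x) + len(h) - 1)
for k, xk in enumerate(x):
    y[k:k + len(h)] += xk * h

assert np.allclose(y, np.convolve(x, h))  # identical to the convolution sum
```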
| 465
|
convolution
|
Why the delta at the end of the approximation?
|
https://dsp.stackexchange.com/questions/16410/why-the-delta-at-the-end-of-the-approximation
|
<p>The equation to approximate an input signal with a unit impulse in continuous time (shown below, before we take the limit) is $\hat{x}(t)=\lim_{\Delta\rightarrow0}\sum^{\infty}_{k=-\infty}x(k\Delta)\delta_\Delta(t-k\Delta)\Delta$. Why is there a final $\Delta$ multiplying the $\delta_\Delta(t-k\Delta)$?</p>
|
<p>Here the signal is in the continuous-time domain. We can approximate any signal with a weighted integral of unit impulses. As the signal is in the continuous-time domain, <strong>integration</strong> is used instead of <strong>summation</strong>.</p>
<p>The actual equation is
$$x(t)=\int_{-\infty}^{\infty}{x(t_0)\delta(t-t_0)dt_0}$$</p>
<p>In your equation $\lim_{\Delta\rightarrow0}\sum^{\infty}_{-\infty}$ stands for integration, so <strong>$\Delta$</strong> is required at the end, which stands for <strong>$dt_0$</strong> in the integral.</p>
<p>Hope that clears it up.</p>
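<p>A quick numerical sketch of why the trailing $\Delta$ matters: the pulse $\delta_\Delta$ has height $1/\Delta$ over a width-$\Delta$ interval, so each term $x(k\Delta)\delta_\Delta(t-k\Delta)\Delta$ contributes exactly $x(k\Delta)$ on its interval, giving a staircase (zero-order-hold) approximation whose error shrinks with $\Delta$. The test signal here is an arbitrary smooth function chosen for the demo:</p>

```python
import numpy as np

def x(t):
    return np.exp(-t) * np.sin(5 * t)      # a smooth test signal

t = np.linspace(0.0, 2.0, 4001)            # fine evaluation grid

def staircase(t, delta):
    # sum_k x(k*delta) * delta_pulse(t - k*delta) * delta:
    # the pulse height 1/delta cancels the factor delta, so the sum
    # equals x(k*delta) on the k-th interval -- a zero-order hold of x
    return x(np.floor(t / delta) * delta)

errs = [np.max(np.abs(x(t) - staircase(t, d))) for d in (0.1, 0.05, 0.025)]
print(errs)   # the error shrinks roughly in proportion to delta
```

<p>Without the final $\Delta$ the sum would scale like $1/\Delta$ and blow up as $\Delta\rightarrow 0$ instead of converging to $x(t)$.</p>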
| 466
|
convolution
|
Unit step function present in convolution result
|
https://dsp.stackexchange.com/questions/28671/unit-step-function-present-in-convolution-result
|
<p>We are currently learning about convolution in my signals and systems class, and one of our textbook problems is to compute the convolution of $x(t) = e^{-\alpha t} u(t)$ and $y(t) = e^{-\beta t} u(t)$. After working out the integral, I obtained the following result</p>
<p>$$
x(t) * h(t) = \frac{e^{-\alpha t} - e^{-\beta t}}{\beta - \alpha}.
$$</p>
<p>The solutions manual for my textbook has the same answer, except their result is multiplied by $u(t)$. I am unsure of where this is coming from, since I know that I computed the integral correctly. I used the unit step functions to simplify the bounds on the integral, so I do not see how they are still around in the final answer.</p>
<p>Can anyone explain what is going on here?</p>
|
<p>A step by step derivation of this convolution would start with the following:</p>
<p>$$
\begin{align}
f(t)
&= \int_{-\infty}^\infty x(\tau) h(t-\tau) \,d\tau \\
&= \int_{-\infty}^\infty e^{-\alpha\tau} u(\tau) e^{-\beta(t-\tau)} u(t-\tau) \,d\tau
\end{align}
$$
And this is probably where you went on and tried to simplify the bounds on the integral with:
$$
\begin{align}
f(t) &= \int_0^t e^{-\alpha\tau} e^{-\beta(t-\tau)} \, d\tau \\
\end{align}
$$
forgetting that this is only true if $t \geq 0$.
Indeed for $t < 0$ the product $u(\tau) u(t-\tau)$ is 0 for all values of $\tau$, and the result of the integral is thus 0.</p>
<p>So, the correct expression would be:
$$
\begin{align}
f(t)
&= u(t) \int_0^t e^{-\alpha\tau} e^{-\beta(t-\tau)} \, d\tau \\
&= \cdots \\
&= u(t) \frac{e^{-\alpha t} - e^{-\beta t}}{\beta-\alpha}
\end{align}
$$</p>
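<p>The role of the $u(t)$ factor is easy to confirm numerically by approximating the convolution integral with a Riemann sum on a $t \geq 0$ grid (the values $\alpha=1$, $\beta=2$ and the step size are arbitrary choices for the demo):</p>

```python
import numpy as np

a, b = 1.0, 2.0                     # alpha, beta (distinct, both > 0)
dt = 1e-3
t = np.arange(0.0, 5.0, dt)

x = np.exp(-a * t)                  # e^{-alpha t} u(t), sampled on t >= 0
h = np.exp(-b * t)                  # e^{-beta t}  u(t)

# Riemann-sum approximation of the convolution integral
y_num = np.convolve(x, h)[:len(t)] * dt
y_ref = (np.exp(-a * t) - np.exp(-b * t)) / (b - a)  # closed form; u(t) = 1 here

print(np.max(np.abs(y_num - y_ref)))  # small discretization error
```

<p>For $t < 0$ the numerical convolution of the two causal signals is identically zero, which is precisely what the $u(t)$ factor in the solutions manual encodes.</p>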
| 467
|
convolution
|
Does the output of this system have a closed form expression?
|
https://dsp.stackexchange.com/questions/28831/does-the-output-of-this-system-have-a-closed-form-expression
|
<p>\begin{align}
x(n)&=\left(-1\right)^n u(n)\\
h(n)&= u(n-1)+u(n-2)\\
y(n)&= x(n)*h(n)\\
y(n)&= \sum_{k=-\infty}^\infty \left(-1\right)^k u(k)\left[u(n-k-1)+u(n-k-2)\right]
\end{align}
Graphically it looks like the output is $u(n-1)$ but I can't do the math to get it into the closed form. </p>
<p>Edit: OK, I pretty much worked it through; I'm just iffy on justifying $n \ge 1$.</p>
<p>$u(k)$ sets the bottom limit of the summation to $0$ and $u(n-k-1)$ sets the upper limit to $n-1$. $n$ has to be at least $1$ for the summation to make sense and the partial sums = $1$.</p>
|
<p>You figured out the solution by yourself, so this is just about how to show it formally. First, split the sum into two sums:</p>
<p>$$y[n]=\sum_{k=-\infty}^{\infty}(-1)^ku[k]u[n-1-k]+\sum_{k=-\infty}^{\infty}(-1)^ku[k]u[n-2-k]\tag{1}$$</p>
<p>The unit steps give you upper and lower summation indices, but note that the resulting sums are only valid if the upper summation index is not smaller than the lower one:</p>
<p>$$y[n]=u[n-1]\sum_{k=0}^{n-1}(-1)^k+u[n-2]\sum_{k=0}^{n-2}(-1)^k\tag{2}$$</p>
<p>Now note that you can increase the upper summation index by any even number without changing the value of the sum. The second term on the right-hand side of $(2)$ can thus be rewritten as</p>
<p>$$u[n-2]\sum_{k=0}^{n-2}(-1)^k=u[n-1]\sum_{k=0}^{n}(-1)^k\tag{3}$$</p>
<p>In $(3)$ the argument of the unit step can be changed to $n-1$ because the value of the sum equals zero for $n=1$. Using $(3)$, Eq. $(2)$ can be rewritten as</p>
<p>$$\begin{align}y[n]&=u[n-1]\left(\sum_{k=0}^{n-1}(-1)^k+\sum_{k=0}^{n}(-1)^k\right)\\&=u[n-1]\left(2\sum_{k=0}^{n-1}(-1)^k+(-1)^n\right)\\&=u[n-1]\left((1+(-1)^{n+1})+(-1)^n\right)\\&=u[n-1]\tag{4}\end{align}$$</p>
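<p>Eq. $(4)$ can be sanity-checked numerically: truncating both sequences to $N$ samples leaves the first $N$ outputs of the full convolution exact, since those outputs only involve indices below $N$. A minimal sketch:</p>

```python
import numpy as np

N = 50
n = np.arange(N)
x = (-1.0) ** n                          # (-1)^n u[n]
h = (n >= 1).astype(float) + (n >= 2)    # u[n-1] + u[n-2] -> 0, 1, 2, 2, ...

y = np.convolve(x, h)[:N]                # exact for 0 <= n < N

assert y[0] == 0 and np.all(y[1:] == 1)  # i.e. y[n] = u[n-1]
```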
| 468
|
convolution
|
What does the convolution mean, what is the convolution philosophy
|
https://dsp.stackexchange.com/questions/62233/what-does-the-convolution-mean-what-is-the-convolution-philosophy
|
<p>I would like to know why the convolution is necessary.</p>
<p>that is, who said that multiplying numbers with others and then adding them would tell us something?</p>
<p>If you could give me analogies without mathematics, it would be a great help for non-mathematicians.</p>
|
<p>I'm somewhat in the same situation as you @Jony. But I'm working currently on a project involving audio processing, and I had to read a lot about convolution and related topics in the last few days.</p>
<p>I'm not sure I gained an <em>intuitive</em> understanding of convolution--esp. convolution vs. correlation-- but, to quote <a href="https://dsp.stackexchange.com/a/18492/46389">another answer</a>, "Convolution in the time domain is equivalent to multiplication in the frequency domain." So it becomes obvious it may have a usage in filtering applications.</p>
<p><a href="https://en.wikipedia.org/wiki/Convolution" rel="nofollow noreferrer">Wikipedia</a> hs also a couple of nice animation that shows you how the convolution of two functions is related to the area of their intersection:</p>
<p><a href="https://i.sstatic.net/fQNAV.gif" rel="nofollow noreferrer"><img src="https://i.sstatic.net/fQNAV.gif" alt="enter image description here"></a>
<a href="https://i.sstatic.net/UeGSy.gif" rel="nofollow noreferrer"><img src="https://i.sstatic.net/UeGSy.gif" alt="enter image description here"></a></p>
<p>Depending on your field of interest, this may give you some application ideas. As a last word, let me repeat I'm very new to this topic, so take this answer with a grain of salt--and I would gratefully stand to be corrected by more experienced users.</p>
| 469
|
convolution
|
Why is "only the term for k = n is nonzero" relevant?
|
https://dsp.stackexchange.com/questions/96007/why-is-only-the-term-for-k-n-is-nonzero-relevant
|
<p>In the book Applied Digital Signal Processing by Manolakis and Ingle, page 45, section 2.5.1 "Properties of convolution", the author says:</p>
<p>"If we consider a system with impulse response <span class="math-container">$h[n] = \delta[n]$</span>, direct substitution into the convolution formula gives <span class="math-container">$y[n] = x[n]$</span>"</p>
<p>However, he justifies his claim by saying:
"because only the term for <span class="math-container">$k = n$</span> is nonzero"</p>
<p>I cannot understand how this justification is relevant, because to my understanding, when the impulse response <span class="math-container">$h[n]$</span> is <span class="math-container">$\delta[n]$</span>, then</p>
<p><span class="math-container">$$\underbrace{\sum_{k = -\infty}^{\infty} x[k]h[n-k]}_{y[n]} \tag 1$$</span></p>
<p>becomes
<span class="math-container">$$\underbrace{\sum_{k = -\infty}^{\infty} x[k]\delta[n-k]}_{x[n]} \tag 2$$</span></p>
<p>So why is that justification relevant?</p>
|
<p>Fix the value of <span class="math-container">$n$</span>, say <span class="math-container">$n=2$</span>, and write out the sum explicitly instead of using mathematical gobbledygook like summation signs <span class="math-container">$\displaystyle \sum$</span>, like so:</p>
<p><span class="math-container">$$\cdots + x[-1]\delta[2-(-1)] + x[0]\delta[2-0] + x[1]\delta[2-1] + x[2]\delta[2-2] + x[3]\delta[2-3] + \cdots$$</span></p>
<p>Then, simplify the arguments of the <span class="math-container">$\delta$</span>'s to get
<span class="math-container">$$\cdots + x[-1]\delta[3)] + x[0]\delta[2] + x[1]\delta[1] + x[2]\delta[0] + x[3]\delta[-1] + x[4]\delta[-2] + \cdots$$</span></p>
<p>Next, apply the definition of <span class="math-container">$\delta[\cdot]$</span> to get</p>
<p><span class="math-container">$$\cdots + x[-1]\cdot 0 + x[0]\cdot 0 + x[1]\cdot 0 + x[2]\cdot 1 + x[3]\cdot 0 + x[4]\cdot 0 + \cdots$$</span></p>
<p>Finally simplify the sum to get <span class="math-container">$x[2]$</span> as the value of the convolution, that is, <span class="math-container">$y[2] = x[2]$</span>. Repeat for <span class="math-container">$n=3, n=4$</span>, etc until you see the pattern: no matter what value of <span class="math-container">$n$</span> you choose, the result <span class="math-container">$y[n]$</span> of the convolution of <span class="math-container">$x$</span> and <span class="math-container">$\delta$</span> is <span class="math-container">$x[n]$</span>. That's what Manolakis and Ingle used in jumping to the conclusion that for all values of <span class="math-container">$n$</span>, <span class="math-container">$y[n]$</span> equals <span class="math-container">$x[n]$</span>.</p>
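<p>The sifting pattern described above is one line of numpy (the sample values in <code>x</code> are arbitrary):</p>

```python
import numpy as np

x = np.array([4.0, -2.0, 7.0, 1.0, 3.0])

delta = np.array([1.0])                               # delta[n]: 1 at n = 0
assert np.array_equal(np.convolve(x, delta), x)       # y[n] = x[n]

delta2 = np.array([0.0, 0.0, 1.0])                    # delta[n-2]
assert np.array_equal(np.convolve(x, delta2)[2:], x)  # just a 2-sample delay
```

<p>Convolving with $\delta[n]$ reproduces the signal, and convolving with a shifted impulse merely delays it, exactly as the term-by-term expansion predicts.</p>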
| 470
|
convolution
|
Convolution for audio, how to control amplitude?
|
https://dsp.stackexchange.com/questions/68748/convolution-for-audio-how-to-control-amplitude
|
<p>I am implementing a convolution reverb method for audio arrays in Python using <code>scipy.signal.convolve</code>. I chose the FFT mode, which is essentially a product of the rFFTs of the two signals, then an inverse FFT back.</p>
<p>What I am not sure about is how to perform the convolution without changing the amplitude of the original signal. My audio sample is normalized to the range -1.0 to 1.0, and the impulse response (IR) is not normalized but has a peak value near 1.0.</p>
<p>A direct <code>convolve(signal, ir)</code> will result in a new signal with an amplitude of over -6.0 to 6.0. This of course results in massive clipping during playback. Of course I can reduce the amplitude of the IR to bring down the level; however, I don't know what the mathematical relationship is here in order to achieve the same amplitude.</p>
<p>Thank you in advance for your advice.</p>
| 471
|
|
convolution
|
Matched filter - do I need to know the signal I am trying to find?
|
https://dsp.stackexchange.com/questions/68876/matched-filter-do-i-need-to-know-the-signal-i-am-trying-to-find
|
<p>I need to identify a matched filter, and so have gone through the mathematics to do this as listed here <a href="https://en.wikipedia.org/wiki/Matched_filter" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/Matched_filter</a> which derives the optimal filter <span class="math-container">$h$</span> to apply to a time series <span class="math-container">$x$</span>, where that time series is assumed to comprise a desirable signal <span class="math-container">$s$</span> and some noise <span class="math-container">$v$</span>. This suggests the optimal matched filter is:
<span class="math-container">\begin{equation}
h = \frac{1}{\sqrt{s^{\mathrm{H}}\mathbf{\mathrm{R}}_v^{-1}s}} \mathbf{\mathrm{R}}_v^{-1}s
\end{equation}</span>
and this should be applied to produce the filtered series <span class="math-container">$y$</span> thus:
<span class="math-container">\begin{equation}
y\left(n\right) = \sum_{k=-\infty}^{\infty} h\left(n-k\right)x\left(k\right)
\end{equation}</span>
however, I don't know <span class="math-container">$s$</span>. How, therefore, do I use this method (if at all)?</p>
|
<p>As stated by others, the signal <span class="math-container">$s[n]$</span> has to be known to calculate the corresponding matched filter.</p>
<p>An alternative is to use the power of deep learning to emulate a matched filter response as shown in <a href="https://arxiv.org/abs/1701.00008" rel="nofollow noreferrer">this paper</a>.</p>
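<p>For completeness, here is a minimal numpy sketch of the whitened matched-filter formula from the question, assuming <span class="math-container">$s$</span> and <span class="math-container">$\mathbf{R}_v$</span> <em>are</em> known (the sine template and AR(1)-style covariance are arbitrary choices for the demo):</p>

```python
import numpy as np

L = 8
n = np.arange(L)
s = np.sin(2 * np.pi * (n + 1) / (L + 1))    # known template s (demo assumption)
Rv = 0.5 ** np.abs(np.subtract.outer(n, n))  # toy AR(1)-style noise covariance

# h = Rv^{-1} s / sqrt(s^H Rv^{-1} s)
Ri_s = np.linalg.solve(Rv, s)
h = Ri_s / np.sqrt(s @ Ri_s)

# Filtering the template itself gives sqrt(s^H Rv^{-1} s), the output SNR term:
print(h @ s, np.sqrt(s @ Ri_s))              # the two values coincide
```

<p>The sketch also shows concretely why <span class="math-container">$s$</span> is indispensable: every quantity in the filter is built from it.</p>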
| 472
|
convolution
|
PCA with CNN Tensorflow
|
https://dsp.stackexchange.com/questions/69130/pca-with-cnn-tensorflow
|
<p>I need to improve my model of a Convolutional Neural Network (CNN). The goal is to recognize facial expressions.
I've been using some strategies like dropout for regularization and the Adam optimizer, but I can't get good accuracy. Therefore I'm trying to implement PCA.</p>
<p>I use two filters for convolution, max pooling after each one, and ReLU for activation. The highest accuracy I could get was 0.6.</p>
<p>I am working with Tensorflow. How can I do that?</p>
<p>Or is there another way to implement pre-training for a CNN?</p>
<p>Thanks in Advance</p>
| 473
|
|
convolution
|
Twisted convolution - simplifying multiple integral via substitution
|
https://dsp.stackexchange.com/questions/69561/twisted-convolution-simplifying-multiple-integral-via-substitution
|
<p>I need help with understanding a proof from a paper. Following is the description:</p>
<blockquote>
<p>Let</p>
<p><span class="math-container">$$r_{1}(t) = \int \int h_{1}\left(\tau, \nu\right) e^{j 2\pi \nu (t - \tau)} s\left(t - \tau\right) d\tau d\nu$$</span></p>
<p><span class="math-container">$$r(t) = \int \int h_{2}\left(\tau, \nu\right) e^{j 2\pi \nu (t - \tau)} r_{1}\left(t - \tau\right) d\tau d\nu$$</span></p>
<p>Substituting <span class="math-container">$r_{1}(t)$</span> in <span class="math-container">$r_{2}(t)$</span> we obtain after some algebraic
manipulations,</p>
<p><span class="math-container">$$r(t) = \int \int h\left(\tau, \nu\right) e^{j 2\pi \nu (t - \tau)} s\left(t - \tau\right) d\tau d\nu$$</span></p>
<p>where, <span class="math-container">$f(\tau, \nu)$</span> is given by</p>
<p><span class="math-container">$$h(\tau, \nu) = \int \int h_{2}\left(\tau', \nu' \right) h_{1}\left(\tau - \tau', \nu-\nu' \right) e^{j 2\pi \nu' \left(\tau - \tau' \right)} d\tau' d\nu'$$</span></p>
</blockquote>
<p>I am unable to use appropriate algebraic manipulations to arrive at the final expression for <span class="math-container">$r(t)$</span>. Can anyone help me? The first step is arrived at by substituting <span class="math-container">$r_{1}(t-\tau)$</span> in <span class="math-container">$r_{2}(t)$</span>. But I am unable to proceed further. (Incorporated corrections from <a href="https://dsp.stackexchange.com/a/69562/34817">Tanya Choudhary's answer</a>)</p>
<p><span class="math-container">$$r(t) = \int \int h_2(\tau,\nu)e^{j 2\pi\nu(t-\tau)} \int \int h_1(\tau_1,\nu_1)e^{j 2\pi\nu(t-\tau-\tau_1)}s(t-\tau-\tau_1)d\tau_1 d\nu_1 d\tau d\nu$$</span></p>
<p>Substituting <span class="math-container">$\tau = \tau', \tau_1 = \tau - \tau', \nu = \nu', \nu_1 = \nu - \nu'$</span>,</p>
<p><span class="math-container">$$r(t) = \int \int h_2(\tau',\nu')e^{j 2\pi\nu'(t-\tau')} \int \int h_1(\tau - \tau',\nu - \nu')e^{j 2\pi\nu(t-\tau)}s(t-\tau)d\tau' d\nu' d\tau d\nu$$</span></p>
|
<p>Since these are two different integral equations, during substitution you cannot use the same variables for both the integrals.
On substituting <span class="math-container">$r_1(t-\tau)$</span> to <span class="math-container">$r(t)$</span> we get,
<span class="math-container">$$r(t) = \int \int h_2(\tau,\nu)e^{j 2\pi\nu(t-\tau)} \int \int h_1(\tau_1,\nu_1)e^{j 2\pi\nu(t-\tau-\tau_1)}s(t-\tau-\tau_1)d\tau_1 d\nu_1 d\tau d\nu$$</span>
Now substitute <span class="math-container">$\tau = \tau', \tau_1 = \tau - \tau', \nu = \nu', \nu_1 = \nu - \nu'$</span></p>
| 474
|
convolution
|
Linear convolution in the DFT domain
|
https://dsp.stackexchange.com/questions/71142/linear-convolution-in-the-dft-domain
|
<p>Let's say I have 2 sequences a and b in the time domain. Both are length N. A and B are the DFT of a and b. If I do a circular convolution of A and B in freq domain (A o B), then the IDFT of the result will be a x b in the time domain. Correct?
Now what happens if I do linear convolution of A and B in the freq domain, i.e. A*B? The results will have a length of 2N-1. What does the IDFT of the result mean in the time domain?</p>
<p>Notation:</p>
<p>-x is multiplication</p>
<p>-* is linear convolution</p>
<p>-o is circular convolution</p>
| 475
|
|
convolution
|
Finding the convolution $a^n u[n]$ with $b^n u[n]$
|
https://dsp.stackexchange.com/questions/71154/finding-the-convolution-an-un-with-bn-un
|
<p>Hi I'm trying to solve the problem when studying for an upcoming test. The given solution is <span class="math-container">$$y[n]= \frac{1}{b-a}(b^{n+1}-a^{n+1}) \quad \text{for } n\ge0 \ .$$</span> However, I'm not sure how to reach this point. We have to also verify using the z-transform. Any help would be appreciated so much.</p>
|
<p>Just do it the hard way by writing out the individual formulas for the general convolution sum
<span class="math-container">\begin{align}
y[n] &= \sum_{k=-\infty}^\infty a[k]b[n-k]\\
&= \sum_{k=0}^\infty a[k]b[n-k] &\scriptstyle{\text{because }a[k]=0~\text{whenever } k < 0,}\\
&= \sum_{k=0}^n a[k]b[n-k] &\scriptstyle{\text{because }b[n-k]=0~\text{whenever } k > n,}
\end{align}</span>
which gives
<span class="math-container">\begin{array}{rclcl}
y[0] &= &a[0]b[0] &= &1\\
&&&= &\displaystyle\frac{b^1-a^1}{b-a}\\
y[1] &= &a[0]b[1] + a[1]b[0] &= &b+a\\
&&&= &\displaystyle\frac{b^2-a^2}{b-a}\\
y[2] &= &a[0]b[2] + a[1]b[1] + a[2]b[0] &= &b^2 + ba +a^2\\
&&&= &\displaystyle\frac{b^3-a^3}{b-a}\\
\vdots &= &\ddots~~~\scriptstyle{\text{Hey, Ma! I think I see a pattern developing here}}\\
\\
y[n] &= &a[0]b[n] + a[1]b[n-1] + \cdots + a[n]b[0]
&= &b^n + b^{n-1}a + \cdots + ba^{n-1} + a^n\\
&&&= &\displaystyle\frac{b^{n+1}-a^{n+1}}{b-a}
\end{array}</span>
if you can recall the formula for the sum of a geometric series.</p>
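<p>The pattern spotted above is easy to confirm numerically: truncating both sequences to <span class="math-container">$N$</span> samples leaves the first <span class="math-container">$N$</span> outputs of the full convolution exact. A minimal check (the values of <span class="math-container">$a$</span> and <span class="math-container">$b$</span> are arbitrary, distinct):</p>

```python
import numpy as np

a, b = 0.6, 0.9
N = 30
n = np.arange(N)

# convolution of a^n u[n] with b^n u[n]; exact for 0 <= n < N
y = np.convolve(a ** n, b ** n)[:N]
y_closed = (b ** (n + 1) - a ** (n + 1)) / (b - a)

assert np.allclose(y, y_closed)
```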
| 476
|
convolution
|
How to compute inner product of Wavelet transform convoluted with signal
|
https://dsp.stackexchange.com/questions/71735/how-to-compute-inner-product-of-wavelet-transform-convoluted-with-signal
|
<p>I have two datasets <span class="math-container">$X_1$</span> and <span class="math-container">$X_2$</span> in a sparse wavelet basis, and I have two filters <span class="math-container">$f_1$</span> and <span class="math-container">$f_2$</span>. I’d like to compute the inner product of the convolutions <span class="math-container">$$\langle X_1 \star f_1, X_2 \star f_2\rangle$$</span> in the sparse wavelet basis. How do I this?</p>
<p>Thanks!
James</p>
| 477
|
|
convolution
|
Convolution of two sets
|
https://dsp.stackexchange.com/questions/72626/convolution-of-two-sets
|
<p>In my systems and signals course I had been asked a question about finding the convolution of two sets.
I was given:</p>
<p><span class="math-container">\begin{align}
x[n] &= \{3,2,1\}\\
h[n] &= \{1,-2,3\}\\
\text{Find}\quad y[n] & = x[n]\star h[n]
\end{align}</span></p>
<p>I have no idea on how to find the convolution of two sets. Any help is appreciated.</p>
|
<p>If it is difficult for you to remember or calculate the convolution of two sequences then you may try doing it as polynomial multiplication. <br/>
Think of <code>x[n]</code> and <code>h[n]</code> as polynomial coefficients. So we have <br/></p>
<pre><code>Px = 3x^2 + 2*x + 1
Ph = 1x^2 - 2*x + 3
</code></pre>
<p>Remember that linear convolution of two sequences is polynomial multiplication. Therefore <br/></p>
<pre><code>Py = Px * Ph
Py = (3x^2 + 2*x + 1) * (1x^2 - 2*x + 3)
Py = 3x^4 + 2x^3 - 6x^3 + x^2 + 9x^2 - 4x^2 + 6x - 2x + 3
Py = 3x^4 - 4x^3 +6x^2 + 4x + 3
</code></pre>
<p>Now writing the polynomial coefficients back in sequence format <br/></p>
<pre><code>y[n] = {3, -4, 6, 4, 3}
</code></pre>
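<p>The "convolution is polynomial multiplication" identity can be checked in one line with numpy:</p>

```python
import numpy as np

x = [3, 2, 1]           # coefficients of Px
h = [1, -2, 3]          # coefficients of Ph

y = np.convolve(x, h)   # linear convolution == polynomial product coefficients
print(y)                # -> [ 3 -4  6  4  3]
```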
| 478
|
convolution
|
Covolution of DTFT
|
https://dsp.stackexchange.com/questions/6211/covolution-of-dtft
|
<blockquote>
<p><strong>Possible Duplicate:</strong><br>
<a href="https://dsp.stackexchange.com/questions/6199/convolution-and-dtft">Convolution and DTFT</a> </p>
</blockquote>
<p>$x_1(n)=x_2(n)=1$<br>
where $0 ≤ n ≤ N-1$</p>
<p>1)The linear convolution of the signal gives a triangle how you write it in mathematical form?</p>
<p>The DTFT of the signal is given as</p>
<p>$X_1(e^{jw})=X_2(e^{jw})=\pi \sum_{k=-\infty}^\infty \delta(w-2\pi k)+$$ \frac1 {1-e^{jw}}$</p>
<p>How to calculate the $ DTFT^-(X_1(e^{jw}).X_2(e^{jw})?$</p>
| 479
|
|
convolution
|
Reconstruction from Convolution
|
https://dsp.stackexchange.com/questions/8389/reconstruction-from-convolution
|
<p>I am trying to implement Wavelet transform in an embedded environment. The problem I face goes like this:</p>
<ol>
<li><p>Two vectors, A and B. A is input samples. B is vector co-efficients.</p></li>
<li><p>I take convolution of A and B, call it C.</p></li>
<li><p>Now I go back to B and up-sample the co-efficients. </p></li>
<li><p>Multiply C with the new up-sampled signal of B.</p></li>
</ol>
<p>Problem is I am constrained to up-sample only C, and not B. Is there a way that I can obtain a result where </p>
<pre><code>conv(upsample(B,2), C) == conv(upsample(C,2), B)
</code></pre>
| 480
|
|
convolution
|
Circular Convolution in Scilab
|
https://dsp.stackexchange.com/questions/10029/circular-convolution-in-scilab
|
<p>I am quite familiar with signal processing using Matlab. Now I am working in its open source alternative 'Scilab.' In Matlab, cconv() function is used to perform circular convolution between two sequences. I want to do circular convolution in Scilab. But I could not find an equivalent function for circular convolution in Scilab. Are there any other methods to perform circular convolution in Scilab (or, is there really a function for circular convolution in Scilab)?</p>
|
<p>I can't tell about SciLab but if it is not implemented, you can always get the circular convolution by</p>
<p>IDFT(DFT(x1) * DFT(x2))</p>
<p>where x1 and x2 are your signal vectors and the multiplication is element-wise. If the signals have a different length, zeropad the shorter one to the length of the longer one. This assumes that you want to do circular convolution with a modulus equal to the length of the longer vector.</p>
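<p>The same recipe works in any language with an FFT; here is a minimal numpy sketch (the input vectors are arbitrary), checked against the direct modulo-<span class="math-container">$N$</span> definition:</p>

```python
import numpy as np

def cconv(x1, x2):
    # circular convolution as IDFT(DFT(x1) * DFT(x2)),
    # zero-padding the shorter vector to the longer length
    n = max(len(x1), len(x2))
    return np.real(np.fft.ifft(np.fft.fft(x1, n) * np.fft.fft(x2, n)))

x1 = np.array([1.0, 2.0, 3.0, 4.0])
x2 = np.array([1.0, 1.0])

# direct modulo-N circular convolution for comparison
n = len(x1)
x2p = np.concatenate([x2, np.zeros(n - len(x2))])
direct = np.array([sum(x1[k] * x2p[(m - k) % n] for k in range(n))
                   for m in range(n)])

assert np.allclose(cconv(x1, x2), direct)
```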
| 481
|
convolution
|
Simplification of a convolution-like expression
|
https://dsp.stackexchange.com/questions/10684/simplification-of-a-convolution-like-expression
|
<p>I am trying to further reduce the following expression (if there's a nicer way to write it):</p>
<p>$\sum_{k=-\infty}^{+\infty}\sum_{p=-\infty}^{+\infty} \sum_{l=-\infty}^{+\infty} w(k) \varphi_\mathrm{y}(l-p)\varphi_\mathrm{x}(k-n-l+p)$</p>
<p>where $\varphi_\mathrm{y}$ and $\varphi_\mathrm{x}$ are the autocorrelations of the signals $y(i)$ and $x(i)$, respectively.</p>
<p>I know I can express $\sum_{k=-\infty}^{+\infty} w(k) \varphi_\mathrm{x}(k-n-l+p)$ as a convolution (or a correlation, using the evenness of $\varphi_\mathrm{x}$) but I left the expression expanded in case it can help.</p>
<p>If necessary, I could take the simplified case of finite (M-samples-)length signals.</p>
<p>Regards</p>
|
<p>As you say, since $\varphi_x$ is an autocorrelation we have that
$$\sum_{k=-\infty}^{\infty} w(k) \varphi_\mathrm{x}(k-n-l+p) =
(w \star \varphi_\mathrm{x})\bigr |_{n+l-p} = g(n+l-p)$$
where $g = w \star \varphi_\mathrm{x}$ (using the evenness of $\varphi_\mathrm{x}$).
So, since $\varphi_y$ is also an autocorrelation, we have
$$\begin{align}
\sum_{l=-\infty}^{\infty} g(n+l-p)\varphi_y(l-p)
&= \sum_{l=-\infty}^{\infty} g(l+n-p)\varphi_y(p-l)\\
&=\sum_{m=-\infty}^{\infty} g(m)\varphi_y(n-m) &
\text{upon setting}~l+n-p=m,\\
&= g \star \varphi_y\bigr|_n = h(n)
\end{align}$$
where $h = g\star\varphi_y = w\star\varphi_x \star\varphi_y$.
So now you are left with $\displaystyle \sum_{p=-\infty}^\infty h(n)$ which
is unbounded. Are you sure that this last sum is with respect to $p$
and not with respect to $n$?</p>
| 482
|
convolution
|
Savitzky-Golay Filter Coefficients and Wikipedia
|
https://dsp.stackexchange.com/questions/10698/savitzky-golay-filter-coefficients-and-wikipedia
|
<p>I am trying to implement a Savitzky-Golay filter by following <a href="https://en.wikipedia.org/wiki/Savitzky%E2%80%93Golay_smoothing_filter" rel="nofollow">this Wikipedia</a> page and in the first instance I have written the following Octave software code to create the convolution coefficients</p>
<pre><code>clear all
m = input( 'Enter number of bars (must be an odd number): ' ) ;
% check no_bars is odd
if mod( m , 2 ) == 0 % is even, report error
fprintf( '\nYou have entered an even number: try again.\n\n' ) ;
else % is odd
% create z
z = ( ( 1 - m ) / 2 : 1 : ( m - 1 ) / 2 )' ;
% create Jacobian matrix J for a cubic polynomial
J = ones( m , 4 ) ;
J( : , 2 ) = z ;
J( : , 3 ) = z .* z ;
J( : , 4 ) = z .* z .* z ;
% create convolution matrix C & show in terminal
C = inv( J' * J ) * J'
end
</code></pre>
<p>which gives a matrix output where the first line are the coefficients for the smooth, second line are coefficients for the 1st derivative, third line are coefficients for the 2nd derivative etc. However, when I check this program's output with the selected coefficients given in the Wikipedia page appendix there is disagreement.</p>
<p>To do this check I take the normalisation figures given in the tables, e.g. 35 for smoothing coefficients for a window of length 5 for a cubic fit, and multiply the matrix by this number, which will then transform the relevant row of the matrix into the coefficients given in the tables. Following this procedure I see that for the actual smoothing coefficients and the coefficients for the 1st derivative for window lengths 5, 7 and 9 for a cubic fit there is agreement. However, for the 2nd and 3rd derivatives there is no agreement. In fact to get agreement the normalisation figures in the table would need to be changed thus:</p>
<ul>
<li>window length 5 - 2nd deriv = 14, 3rd deriv = 12</li>
<li>window length 7 - 2nd deriv = 84, 3rd deriv = 36</li>
<li>window length 9 - 2nd deriv = 924, 3rd deriv = 1188</li>
</ul>
<p>Therefore I am now uncertain whether I have understood the Wikipedia page and have correctly implemented the above code, or whether in fact the Wikipedia page is wrong in this regard. Can anyone point out where the problem is?</p>
| 483
|
|
convolution
|
Convolving time reversed discrete input with time reversed discrete impulse response
|
https://dsp.stackexchange.com/questions/12690/convolving-time-reversed-discrete-input-with-time-reversed-discrete-impulse-resp
|
<p>This is Question 3.1 d from Chapter 3 of Oppenheim's <em>Signals & Systems</em>.
I have </p>
<p>$$x[n]=(-1)^n{u[-n]-u[-n-8]}$$
$$h[n]=u[n]-u[n-8]$$</p>
<p>and the question asks for $y[n]$ which is the convolution of $x[n]$ and $h[n]$ namely </p>
<p>$$y[n]=x[n]\star h[n]$$</p>
<p>I'm stuck with the limits on the sum </p>
<p>$$\sum_{k=-\infty}^{\infty}{\left((-1)^{k}(u[-k]-u[-k-8])\right)\left(u[n-k]-u[(n-8)-k]\right)}$$</p>
<p>Should the limit be '$0$' or '$n-k$' or '$n-8$'?</p>
|
<p>Here is another example of how incredibly poor choice of notation misleads students everywhere. I do not possess a copy of any of the various tomes on signal processing that seem to be revered as the fifth Gospel on this site, and so
I do not know if the book cited by the OP actually has written $(1)$,
but I assert that it is against mathematical common sense to write things like
$$y[n] = x[n] \star h[n].\tag{1}$$
It is generally accepted mathematical convention that a symbol has
has the same meaning everywhere it appears in an equation or expression.
For example, when we write
$$y(t) = x(t)h(t),~ -\infty < t < \infty, \tag{2}$$
we mean by this that for <em>every</em> choice of real number $t$, the value
of $y(t)$ is the same as the product of the values of $x(t)$ and $h(t)$;
for example, $y(3) = x(3)h(3)$, and $y(313.012) = x(313.012)h(313.012)$,
and so on and so forth.
So how are we supposed to make sense of this monstrosity $(1)$?
Clearly $y[3]$ is <em>not</em> $x[3]\star h[3]$ whatever meaning we might
ascribe to the latter quantity; the value of $y[3]$ depends on
a <em>lot</em> of other values of $x$ and $h$. The poor notation misleads
students into writing things like</p>
<p>$$\sum_{k=-\infty}^{\infty}{\left((-1)^{k}(u[-k]-u[-k-8])\right)\left(u[-n-k]-u[(-n-8)-k]\right)}$$</p>
<p>which is not the correct expression for the convolution (or the correlation,
for that matter) of
$$x[n]=(-1)^n\left(u[-n]-u[-n-8]\right)\quad \text{and} \quad
h[n]=u[n]-u[n-8]\tag{3}$$</p>
<p>The correct way to do this is write
$$y[n] = (x\star h)[n] = \sum_{k=-\infty}^\infty x[k]h[n-k]$$
and <em>then</em> substitute $n-k$ for $n$ in $h[n]=u[n]-u[n-8]$
to
get $h[n-k] = u[n-k]- u[n-k-8]$ and not $u[-n-k]-u[(-n-8)-k]$
the way that the OP has it. </p>
<p>Some initial thought might have
shown (assuming that $u[n]$ is the unit step)
that $h[n]$ is $0$ for $n < 0$ (since then both $u[n]$ and $u[n-8]$
are $0$), and that $h[n] = 0$ for $n \geq 8$ (since then
both $u[n]$ and $u[n-8]$ are $1$). Consequently, for any given
fixed integer $n$, $h[n-k]$ is nonzero only for
$$0 \leq n-k \leq 7 \Rightarrow n-7 \leq k \leq n$$
and so the limits on the sum could be reduced from
$-\infty$ and $\infty$ to $n-7$ and $n$ (cf. my comment
on the OP's question). I will leave it to the OP to figure
out whether the fact that $x[n]$ is <em>also</em> time-limited
can be used to further reduce the limits on the sum.
Hint: the answer to this might depend on the specific
value of $n$, and you may need to try several different
values of $n$, e.g. $0, 3, 8, 10, -40$ etc to see if you
can discern a pattern.</p>
| 484
|
convolution
|
How do I combine a copy of a signal with the original to achieve a chorus effect in a .wav file using Python?
|
https://dsp.stackexchange.com/questions/14514/how-do-i-combine-a-copy-of-a-signal-with-the-original-to-achieve-a-chorus-effect
|
<p>Can you please explain or point the right way to think about how to implement two sound effects in a .wav file.</p>
<p>So I read a .wav file and converted it to int values (in <code>numpy</code> array) </p>
<pre><code>array([59383, 58290, 60629, ..., 52606, 51982, 56274], dtype=uint16)
</code></pre>
<p>First, I am not sure I am clear about what these values in array really represent.
Is it correct that every value is one of 65536 (<code>unsigned int16</code>) magnitude levels that the analog output device will produce at some moment in time?</p>
<p>In theory chorus effect could be achieved by following steps:</p>
<ol>
<li>Make copy of original array </li>
<li>Weaken this copied array ( multiplying by some <code>value < 1</code> )</li>
<li>Add this copied array to original one</li>
</ol>
<p>In practice I don't know how to add them correctly. Should I simply add values one by one, make a convolution of the two arrays or interleave original array and its modified copy?</p>
<p>Also a chorus effect should have an <strong>arbitrary</strong> time <strong>delay</strong> and I don't know how to accomplish this.</p>
<p>How could I implement that randomization of the time delay for the data I am about to add? </p>
| 485
|
|
convolution
|
Finding linear convolution of two time series
|
https://dsp.stackexchange.com/questions/16414/finding-linear-convolution-of-two-time-series
|
<p>PROBLEM</p>
<p>Two real, causal time series</p>
<p>$$f_k = a^k \quad \text{and} \quad g_k = b^k$$</p>
<p>where $a \neq b$, $|a| < 1$, and $|b| < 1$, are given for $k = 0,1,2,...$</p>
<p>Find the linear convolution $h_n$ of the time series $f_k$ and $g_k$.</p>
<p>ATTEMPTED SOLUTION</p>
<p>I think perhaps my attempted solution here can be improved upon. We have from the definition:</p>
<p>$$h_n = \sum_{k=0}^{n} f_k g_{n-k} = \sum_{k=0}^{n} a^k b^{n-k} = b^n \sum_{k=0}^{n} \left(\frac{a}{b} \right)^k$$</p>
<p>If we let $R = \left(\frac{a}{b} \right)$, the last term can be written as:</p>
<p>$$h_n = b^n \cdot \frac{1 - R^{n+1}}{1 - R}$$</p>
<p>or</p>
<p>$$h_n = \frac{b^n - \frac{a^{n+1}}{b}}{1 - \frac{a}{b}}$$</p>
<p>$$h_n = \frac{b^{n+1} - a^{n+1}}{b-a}$$</p>
<p>I don't see how I can improve upon this answer though, but since we are given certain conditions in the problem, I have a feeling that it is possible to get a more elegant solution. I see that as $n \to \infty$, $h_n \to 0$, but is there any way I can "improve" on the answer for any $n$?</p>
|
<p>There is a small mistake in your solution. Here $f_k$ and $g_k$ are infinite-duration signals, i.e. $n \to \infty$. You made the mistake of taking the $n$ in $h_n$ and the number of coefficients in $f_k$ or $g_k$ to be the same, but they are different.</p>
<p>$$h_n = \sum_{k=0}^{\infty} f_k g_{n-k} = \sum_{k=0}^{\infty} a^k b^{n-k} = b^n \sum_{k=0}^{\infty} \left(\frac{a}{b} \right)^k$$</p>
<p>$$h_n = b^n \cdot \frac{1}{1 - \frac{a}{b}}$$</p>
<p>$$h_n = \frac{b^{n+1}}{b-a}$$</p>
<p>If you want a finite number of coefficients in $f_k$ and $g_k$, use a variable other than <strong>n</strong>.</p>
<p>Then the <strong>n</strong> on the RHS of your equation will be replaced by the new variable.</p>
| 486
|
convolution
|
How to derive the answer to this convolution problem?
|
https://dsp.stackexchange.com/questions/18136/how-to-derive-the-answer-to-this-convolution-problem
|
<p>I came across this below question, (which was a homework assignment question for Signal Processing class, which my friend mailed me for help solving), mulled over it for an hour and had no idea how to proceed with solving it. </p>
<p>Let $C(x) = A(x)B(x)$ where:</p>
<p>$$A(x)=\sum_{n=0}^{N_1}a(n)x^n$$
$$B(x)=\sum_{n=0}^{N_2}b(n)x^{2n}$$
$$C(x)=\sum_{n=0}^{N_3}c(n)x^n$$ </p>
<p>Find expressions for $N_3$ and $c(n)$ as functions of $N_1$,$N_2$, the $a(n)$ and $b(n)$.</p>
<p>Apparently, it has something to do with convolution, as in $c(n)$ is the convolution result of $a(n)$ & $b(n)$ or something like that.<br>
But I still can't figure out how that is. Can anybody please explain me the answer of $N_3$ and $c(n)$</p>
|
<p><strong>HINT:</strong> what is the highest power of $x$ after multiplying $A(x)$ and $B(x)$? This gives you directly the value of $N_3$. Then rewrite $B(x)$ as</p>
<p>$$B(x)=\sum_{n=0}^{2N_2}\hat{b}_nx^n$$</p>
<p>and you can use normal convolution of $a(n)$ and $\hat{b}(n)$ to derive $c(n)$. Now you just need to express $\hat{b}(n)$ in terms of $b(n)$.</p>
<p><strong>EDIT:</strong> OK, so here's the solution:</p>
<p>We have</p>
<p>$$C(x)=\sum_{n=0}^{N_3}c_nx^n$$</p>
<p>Since $C(x)=A(x)B(x)$, the highest power of $C(x)$ must be $N_3=N_1+2N_2$. You can rewrite $B(x)$ as</p>
<p>$$B(x)=\sum_{n=0}^{2N_2}\hat{b}_nx^n$$</p>
<p>with</p>
<p>$$\hat{b}_n=\begin{cases}b_{n/2},&n \text{ even}\\
0,&n \text{ odd}\end{cases}$$</p>
<p>Now the coefficients $c_n$ can be written as the convolution of $a_n$ and $\hat{b}_n$:</p>
<p>$$c_n=\sum_k\hat{b}_ka_{n-k}$$</p>
<p>For the index $k$ in the above sum we have the following constraints:</p>
<p>$$0\le k\le 2N_2\quad\text{and}\quad 0\le n-k\le N_1$$</p>
<p>which results in the summation limits</p>
<p>$$c_n=\sum_{k=\max\{0,n-N_1\}}^{\min\{n,2N_2\}}\hat{b}_ka_{n-k},\quad 0\le n\le N_3$$</p>
<p>If you sum only over even $k$, you can replace $\hat{b}_k$ by the coefficients $b_{k/2}$:</p>
<p>$$c_n=\sum_{k=\max\{0,n-N_1\},k\text{ even}}^{\min\{n,2N_2\}}b_{k/2}a_{n-k},\quad 0\le n\le N_3$$</p>
<p>which can be rewritten once more as</p>
<p>$$c_n=\sum_{k=\lceil{\max\{0,n-N_1\}/2}\rceil}^{\lfloor{\min\{n,2N_2\}/2}\rfloor}b_{k}a_{n-2k},\quad 0\le n\le N_3$$</p>
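<p>The final formula is easy to check numerically. A Python sketch (the degrees and coefficient values below are arbitrary; the direct route multiplies $A(x)$ by $B(x)$ via the even-power coefficients $\hat{b}_n$, the closed-form route uses the ceil/floor summation limits):</p>

```python
import math
import random

random.seed(0)
N1, N2 = 3, 2
a = [random.randint(-5, 5) for _ in range(N1 + 1)]   # coefficients of A(x)
b = [random.randint(-5, 5) for _ in range(N2 + 1)]   # coefficients of B(x)
N3 = N1 + 2 * N2                                     # highest power of C(x)

# Direct route: spread b onto even powers (b-hat) and multiply polynomials.
bhat = [0] * (2 * N2 + 1)
for k in range(N2 + 1):
    bhat[2 * k] = b[k]
c_direct = [0] * (N3 + 1)
for i, ai in enumerate(a):
    for j, bj in enumerate(bhat):
        c_direct[i + j] += ai * bj

# Closed-form route with the summation limits derived above.
def c(n):
    lo = math.ceil(max(0, n - N1) / 2)
    hi = math.floor(min(n, 2 * N2) / 2)
    return sum(b[k] * a[n - 2 * k] for k in range(lo, hi + 1))

c_closed = [c(n) for n in range(N3 + 1)]
```

<p>Both routes produce the same $N_3+1$ coefficients.</p>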
| 487
|
convolution
|
Convolution of h(n) & x(n)
|
https://dsp.stackexchange.com/questions/18408/convolution-of-hn-xn
|
<p>Can anyone please help me find the convolution for the following problems?</p>
<p>1) $h(n) = \delta[\sin(1 + |n|)]\quad \text{and}\quad x(n) = \sin(n^2)$</p>
<p>2) $h(n) = \delta[\sin(2\pi n/N)] \quad \text{and}\quad x(n) = n^2$ where $N$ is odd.</p>
|
<p><strong>HINT:</strong></p>
<p>I assume that $n$ is an integer and that we're talking about discrete-time signals. You need to simplify the expressions for $h(n)$ by noting that $\delta(n)=1$ for $n=0$ and zero otherwise. So, for which values of $n$ does $\sin(1+|n|)$ equal zero, and for which values of $n$ does $\sin(2\pi n/N)$ equal zero?</p>
| 488
|
convolution
|
Implementing overlap add method for convolution with a slice of the filter
|
https://dsp.stackexchange.com/questions/19591/implementing-overlap-add-method-for-convolution-with-a-slice-of-the-filter
|
<p>In <a href="http://en.wikipedia.org/wiki/Overlap%E2%80%93add_method" rel="nofollow">overlap add method for convolution</a> a filter $x$ of length $M$ will get convolved with a signal $y$ of length $N$ where $M << N$, i.e., $z = x * y $.</p>
<p>Here the signal $y$ is sliced into subparts each of length $M$.</p>
<p>Can I slice $y$ into subparts of size $P < M$ and then do the convolution by overlap add method. What should be the algorithm for this.</p>
<p>My implementation of the overlap-add method is given <a href="http://pastebin.com/kTnb6H8h" rel="nofollow">here (in MATLAB)</a>.
In the for loop of that code, I wish to have filterLength < 16.
How should I modify this code?</p>
|
<p>What you describe is typically called "partitioned convolution". The benefit is reduced latency at some cost in efficiency (unless the larger-size FFT is considerably less efficient, e.g. due to cache size). This reduction in efficiency may be outweighed by the increased ability to use multiple cores for the parallel tasks.</p>
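<p>A minimal time-domain sketch of the partitioning itself (Python purely for illustration; a real implementation would do each per-block convolution with an FFT, which is omitted here, and the helper names are mine): the filter is split into blocks of length P, each block is convolved with the signal, and each partial result is added back at an offset equal to its block's starting index.</p>

```python
import random

def direct_conv(a, b):
    # Plain linear convolution, used as reference and as the per-block kernel.
    out = [0.0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

def partitioned_conv(h, y, P):
    # Split the filter h into blocks of length P; the convolution of each
    # block with the signal y is added back at an offset of the block start.
    out = [0.0] * (len(h) + len(y) - 1)
    for start in range(0, len(h), P):
        for i, v in enumerate(direct_conv(h[start:start + P], y)):
            out[start + i] += v
    return out

random.seed(0)
h = [random.uniform(-1, 1) for _ in range(10)]   # filter, length M = 10
y = [random.uniform(-1, 1) for _ in range(30)]   # signal, length N = 30
ref = direct_conv(h, y)
part = partitioned_conv(h, y, P=4)               # P < M, as in the question
```

<p>The partitioned result agrees with the one-shot convolution to floating-point precision.</p>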
| 489
|
convolution
|
convolution of a real and complex signal
|
https://dsp.stackexchange.com/questions/19890/convolution-of-a-real-and-complex-signal
|
<p>How do we define convolution of: $$x(t)= \mathrm{sinc}(W t)$$ and $$y(t)= -j\mathrm{sinc}(2W t)$$ </p>
<p><strong>Answer:</strong> In the frequency domain, both of them are rectangular functions, and their product is equal to the rectangular function with the smaller bandwidth, i.e. that of $x(t)$, which means the convolved signal is $\mathrm{sinc}(W t)$ with some scaling factor.</p>
<p>But I do not understand how this could be done, the two signals are orthogonal (because one is real and the other imaginary). So shouldn't convolution be 0? I would appreciate if someone could explain.</p>
|
<p>You are confusing several different notions. First of all, two signals $x(t)$ and $y(t)$ are orthogonal if</p>
<p>$$\int_{-\infty}^{\infty}x(t)y^*(t)dt=0\tag{1}$$</p>
<p>From (1) it is clear that if two real-valued functions $x(t)$ and $y(t)$ are <em>not</em> orthogonal, i.e. the integral in (1) is not equal to zero, then also $x(t)$ and $jy(t)$ are not orthogonal. So your argument that the two given signals must be orthogonal because one of them is real-valued and the other one is imaginary is wrong.</p>
<p>Furthermore, even if two signals are orthogonal to each other, their convolution is usually non-zero. Just take two rectangular functions that do not overlap. Obviously, they are orthogonal, but their convolution is a triangular function.</p>
<p>Consequently, one way to interpret the convolution of the two sinc functions is that a low pass signal with cut-off frequency $W$ is filtered by a low pass filter with cut-off frequency $2W$, and multiplied by the (irrelevant) factor $-j$. Since the low pass filter has no effect on the low pass signal, the only change to the signal is the constant factor $-j$.</p>
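<p>The non-overlapping-rectangles example from the second paragraph can be checked in a few lines (a Python sketch, used here purely for illustration): the inner product is zero, yet the linear convolution is a nonzero triangle.</p>

```python
# Two rectangles that do not overlap in time: clearly orthogonal.
x = [1, 1, 0, 0]
y = [0, 0, 1, 1]
inner_product = sum(a * b for a, b in zip(x, y))

# Their linear convolution, however, is a (nonzero) triangle.
L = len(x) + len(y) - 1
conv = [sum(x[k] * (y[n - k] if 0 <= n - k < len(y) else 0)
            for k in range(len(x))) for n in range(L)]
```

<p>Here <code>inner_product</code> is 0 while <code>conv</code> is the triangle [0, 0, 1, 2, 1, 0, 0].</p>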
| 490
|
convolution
|
Convolution of one signal with an evenly spaced signal
|
https://dsp.stackexchange.com/questions/23189/convolution-of-one-signal-with-an-evenly-spaced-signal
|
<p>I do not specialize in signal processing so I wonder if there is any references to the following procedure.</p>
<p>Let $[n]=\{0,1,\ldots,n\}$.</p>
<p>Consider the function $f:[n] \to \mathbb{R}$ and $g:[kr] \to \mathbb{R}$, such that $\operatorname{supp} g = \{0,r,2r,3r,\ldots,kr\}$. Assume for simplicity, $n/r$ is a integer.</p>
<p>We are interested in computing the linear convolution $f*g$.</p>
<p>If $r$ is large, one way to do the computation is partition $f$ into $r$ functions $f_0,\ldots,f_{r-1}$, such that $f_i(t) = f(tr+i)$, and $f_i:[n/r]\to \mathbb{R}$.</p>
<p>We also define $\bar{g}:[k]\to \mathbb{R}$ as $\bar{g}(t) = g(rt)$. </p>
<p>Find $f_i*\bar{g}$ to each $i$, then we can read off $(f*g)(tr+i) = (f_i*\bar{g})(t)$.</p>
<p>This make sure we apply $r$ convolutions on sequence of length $n/r$ and $k$. Instead of $1$ convolution on sequence of length $n$ and $kr$. </p>
|
<p>This is known as <a href="http://www.ece.unm.edu/faculty/bsanthan/ece539/note3.pdf" rel="nofollow">polyphase decomposition</a>. It is often used as en efficient implementation of filtering combined with decimation or interpolation.</p>
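<p>A small numeric check of the decomposition described in the question (a Python sketch; the sizes $r=3$, $k=2$, $n=11$ and the sample values are arbitrary). One long convolution of $f$ with the sparse $g$ is reproduced exactly by $r$ short convolutions of the polyphase components $f_i$ with the decimated filter $\bar{g}$:</p>

```python
import random

def conv(a, b):
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

random.seed(1)
r, k = 3, 2
f = [random.randint(-3, 3) for _ in range(12)]        # f on [0, 11], n/r integer
gbar = [random.randint(-3, 3) for _ in range(k + 1)]  # gbar(t) = g(r*t)
g = [0] * (k * r + 1)                                 # g supported on multiples of r
for t, v in enumerate(gbar):
    g[t * r] = v

full = conv(f, g)                                     # one long convolution

# Polyphase route: r short convolutions with the decimated filter gbar,
# then read off (f*g)(t*r + i) = (f_i * gbar)(t).
poly = [0] * len(full)
for i in range(r):
    fi = f[i::r]                                      # f_i(t) = f(t*r + i)
    for t, v in enumerate(conv(fi, gbar)):
        poly[t * r + i] = v
```

<p>The two routes produce identical outputs, sample for sample.</p>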
| 491
|
convolution
|
Limits of convolution
|
https://dsp.stackexchange.com/questions/26379/limits-of-convolution
|
<p>Say we have a function of time ($S(t)$) of length $T$, and then a customized impulse response (say $I(t)$) of length $T+N$. The question is: when $S(t)$ is convolved with $I(t)$, what are the possible frequency-domain (or alternatively time-domain) representations of all of the possible resulting functions?</p>
<p>For the case when the convolution is performed circularly, the solution appears immediately clear. If $S(t)$ has zero-magnitude components in the frequency domain, they will stay zero no matter how the impulse response is configured. Thus, it's impossible, for example, to convert a sinusoidal waveform into any non-sinusoidal waveform using circular convolution.</p>
<p>But the result is a lot less clear for the case when linear convolution is performed. If we limit our horizon to the period of $T$, and completely disregard the tail that the impulse response will generate (for cases where $T>1$, and length of the impulse response $ > 1$ ), what will the answer be then? Are there any limits, or can you transform any function into any other function (provided the tail is thrown away), using linear convolution?</p>
<p>(EDIT: for non-causal impulse responses the "pre-echo" is also thrown away; time shifting is also allowed for such cases.)</p>
|
<p>If the function is zero-valued throughout then so will be the output. But if you exclude this trivial homogeneous case, then your question can be reformulated as:</p>
<p>Given any function $f$ of compact support and any list $l$ of frequency domain zeros, does a function $g$ always exist that equals $f$ within the support of $f$ and has a Fourier transform that has the zeros given in $l$, with other zeros allowed?</p>
<p>Starting at $g_0 = f$, it would be enough to demonstrate a procedure $p$ such that $g_{i+1} = p(g_i, l_i)$ that adds a single given frequency domain zero $l_i$ to function $g_i$ without modifying it within or to the left of the support of $f$ and without altering the existing frequency domain zeros. We choose a causal filter as that procedure. The procedure being a filter guarantees that existing frequency domain zeros are preserved. To not change the function within or to the left of the support of $f$, the filter must have an impulse response that has a Dirac delta function followed by a zero-valued segment of the same length as the support of $f$. A comb filter consisting of two Dirac delta functions, the second one inverted and located after the zero-valued segment, matches this description. The comb filter can be made to have a zero at any wanted frequency by placing the inverted Dirac delta to an integer multiple of the period of that frequency. Choosing a large enough multiplier moves the inverted Dirac delta to outside the zero-valued segment.</p>
<p>If the functions are discretely sampled sequences rather than functions of a real argument, a Dirac delta function that lands between the sample points gives a sampled sinc function that has no compact support. To circumvent this problem, instead of a sinc impulse, a fractional, unity gain delay at any given frequency can be created by setting to certain values the two samples surrounding a multiple of the period of that frequency. The delay and gain at other frequencies will be off but this matters not.</p>
<p>So (unless I missed something), the answer to your question is: No, there are no limitations if the (allowed to be long enough) tail is thrown away.</p>
<p>Note that successive application of the above procedure does not generate the impulse response of your original question but an output (one of many possible) that enables a realizable impulse response.</p>
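<p>The comb-filter step can be illustrated in a few lines (a Python sketch; the support length <code>L</code> and delay <code>M</code> are arbitrary, with <code>M >= L</code> so the inverted impulse lands outside the support of $f$). The filter $\delta[n]-\delta[n-M]$ leaves $f$ untouched within its support, while placing a spectral zero at frequency $1/M$ regardless of $f$:</p>

```python
import cmath
import random

random.seed(2)
L, M = 5, 8                          # support of f, and comb delay M >= L
f = [random.uniform(-1, 1) for _ in range(L)]

# Comb filter delta[n] - delta[n - M]: output y[n] = f[n] - f[n - M].
y = [0.0] * (L + M)
for n, v in enumerate(f):
    y[n] += v
    y[n + M] -= v

# Comb response at frequency 1/M: H = 1 - e^{-j*2*pi} = 0, so the output
# spectrum has a zero there no matter what f is.
w = 2 * cmath.pi / M
Y = sum(v * cmath.exp(-1j * w * n) for n, v in enumerate(y))
```

<p>Here <code>y[:L]</code> equals <code>f</code> exactly, and <code>abs(Y)</code> is zero to machine precision.</p>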
| 492
|
convolution
|
Approaching the response of an RC circuit via y=ifft(fft(x)fft(h)). Why is this valid?
|
https://dsp.stackexchange.com/questions/26924/approaching-the-response-of-an-rc-circuit-via-y-ifftfftxffth-why-is-this
|
<p>Some textbooks (like <em>Numerical Recipes: The Art of Scientific Computing</em>) derive the DFT as a Riemann sum of the CTFT. With this in mind it would be natural then to approximate the identity $$y(t)=x*h=\mathcal{F}^{-1}\big\{XH\big\}$$</p>
<p>with the MATLAB code y=ifft(fft(x).*fft(h)), which roughly means that my response is the inverse DFT of the product of the DFTs. </p>
<p>I have been reading recently that this approach isn't really valid in the case of the DFT. The relevant identity in the discrete realm is $$\mathcal{F}^{-1}(XH)_{n}=\sum_{l=0}^{N-1} x_l (y_N)_{n-l}$$ which is called a circular convolution.</p>
<p>But so far, my ifft(fft...) approach yields results that are completely compatible with the analytical results.</p>
<p>I also want to mention that I have checked for the well-known result that convolving with a shifted impulse, shifts your response along the domain; and the result that scaling an impulse, scales the response.</p>
<p>I've implemented impulses by inputting the coefficient in the dirac-delta function in some position on a vector, without really justifying why this works.</p>
<p><strong>So therefore I have two questions:</strong></p>
<p>Why is the y=ifft(fft(x).*fft(h)) approach valid and compatible with the theory? Is this the way its supposed to be? How can I justify such a thing? Why aren't the step factors $dt$ involved in this approach?</p>
<p>Why are impulses correctly modeled by only inputting the coefficient that would correspond to the dirac-delta function? (This seems intuitive to me, but I wouldn't like to say my opinion as I would prefer to listen to you mostly).</p>
<p>Thanks!</p>
<p>Edit: <strong>This is the problem I am working on</strong></p>
<p>Using MATLAB, I've been told to use the convolution theorem to find the response of an RC circuit given a signal $x(t)=\displaystyle \sum_i \delta(t-t_i)$ and $x(t) = \cos(t)$ (and other similar ones like $x(t)=e^{-t}\cos(t)$, but this isn't important). </p>
<p>The purpose of this exercise is not to solve things analytically, but to use MATLAB. I specifically want to do this with the fft and ifft functions in MATLAB. </p>
<p>The way I'm proceeding is by writing something like:</p>
<pre><code> t=(-5:.01:5);
x=[zeros(1,500) 1 zeros(1,500)];
h=heaviside(t).*exp(-t);
plot(t,ifft(fft(h).*fft(x)));
</code></pre>
<p>Which works perfectly fine. If I want to analyse $x(t)=\cos(t)$</p>
<p>I just modify my code the following way:</p>
<pre><code>t=(-5:.01:5);
x=cos(t);
h=heaviside(t).*exp(-t);
plot(t,.01*ifft(fft(h).*fft(x)));
</code></pre>
<p>which again, works fine. </p>
<p><em>I'll repeat my questions for the purpose of clarity:</em></p>
<p>Why is this valid? This convolution identity isnt valid in the discrete case. Also, the interval $(-5,5)$ is not at all similar to $(-\infty,\infty)$ so my Riemann approximation should suck (for the cosine case), but it doesn't.</p>
<p>Can anyone explain why using the step only in front of the <code>ifft</code> is what works here? Namely, why shouldn't I be doing <code>.01*ifft(.01*fft(h).*fft(x)*.01)</code>, which seems more in line with the expression of the CTFT.</p>
| 493
|
|
convolution
|
Convolution & FT properties
|
https://dsp.stackexchange.com/questions/28958/convolution-ft-properties
|
<p>I have been trying hardly to check if these equalities are true or false. However, I have not been able to conclude anything. Could you help me, please?</p>
<p>$$y[2n]=h[n]\star x[2n] $$</p>
<p>$$\mathfrak{F}(x[-n])e^{-j2\pi Fm}=\mathfrak{F}(x[-n+m]) $$</p>
|
<p>I'll try to get you started.</p>
<p>For the first part:</p>
<ol>
<li>Write out the convolution sum for $y[n]=h[n]\star x[n]$</li>
<li>Replace $n$ by $2n$, which gives you an expression for $y[2n]$</li>
<li>Write out the convolution sum for $h[n]\star x[2n]$</li>
<li>Check if both expressions are equal. If you've done everything right, the answer should be 'no' (i.e., they're not equal). If you fail showing this, add your steps to your question and explain what the problem was.</li>
</ol>
<p>For the second part:</p>
<ol>
<li>Write the Fourier transform of $x[-n]$ in terms of the Fourier transform of $x[n]$, and multiply by $e^{-j2\pi fm}$; that's the left-hand side of the equation.</li>
<li>Compute the Fourier transform of the right-hand side. <a href="https://dsp.stackexchange.com/questions/27955/z-transform-of-xa-n-where-a-is-int/27971#27971">This answer</a> will be helpful (replace $z$ by $e^{j2\pi f}$).</li>
<li>Like step 4 above, just that in this case the answer should be 'yes' (i.e., they're equal).</li>
</ol>
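<p>As a sanity check for the first part, even a unit-impulse input already shows the two sides differ (a Python sketch; the filter taps and signal are arbitrary choices of mine):</p>

```python
def conv(a, b):
    # Plain linear convolution of two finite sequences.
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

h = [1, 2, 3, 4]                    # arbitrary impulse response
x = [5, 0, 0, 0, 0, 0, 0, 0]        # scaled impulse, length 8

y = conv(h, x)                      # y[n] = h[n] (*) x[n]
left = y[::2]                       # y[2n]: [5, 15, 0, ...]
right = conv(h, x[::2])             # h[n] (*) x[2n]: [5, 10, 15, 20, ...]
```

<p>The decimated output <code>left</code> skips every other tap of $h$, while <code>right</code> keeps them all, so the two sequences disagree already at index 1.</p>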
| 494
|
convolution
|
Do I have to flip my kernel when performing an FFT-based convolution?
|
https://dsp.stackexchange.com/questions/29065/do-i-have-to-flip-my-kernel-when-performing-an-fft-based-convolution
|
<p>I have a 2D image and I convolve it with a 2D kernel image using FFT. So far I was always using symmetric kernels (e.g., Gaussian with stddev_x = stddev_y). As a result, I never bothered thinking whether I have to flip my kernel image or not because it wouldn't have made any difference. However, now I want to convolve my image using an elliptical Gaussian kernel with stddev_x != stddev_y and an arbitrary angle. Now, do I have to flip my kernel image prior the FFT convolution? Or the flipping is required only when using the usual convolution algorithm and not the FFT-based one?</p>
<p>Thank you.</p>
|
<p>Flipping & dragging is an animated method used in the graphical computation of convolution in the time (or space) domain. It is the result of an argument manipulation in the $h[n-k]$ (or $x[n-k]$) signal, animated as a function of $n$ but drawn on an axis of $k$, in the convolution sum: $$y[n] = x[n]*h[n]=\sum_k x[k]h[n-k]$$ On the other hand, in frequency-domain DFT (FFT) based convolution implementations you don't need it, as you simply multiply the two DFTs.</p>
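<p>A one-dimensional sketch makes this concrete (Python, purely for illustration; the same idea carries over to 2D). The DFT-product route reproduces circular convolution as-is, because the "flip" is already built into the $n-k$ argument of the convolution definition; pre-flipping an asymmetric kernel before the DFT would give you correlation instead:</p>

```python
import cmath

def dft(x, sign=-1):
    # Naive DFT; sign=+1 gives the inverse kernel (before the 1/N factor).
    N = len(x)
    return [sum(x[n] * cmath.exp(sign * 2j * cmath.pi * k * n / N)
                for n in range(N)) for k in range(N)]

def idft(X):
    N = len(X)
    return [v / N for v in dft(X, sign=+1)]

x = [1.0, 2.0, 3.0, 4.0]
h = [1.0, -1.0, 0.0, 0.0]            # deliberately asymmetric kernel

# Circular convolution by its time-domain definition (flip is in n-k):
direct = [sum(x[k] * h[(n - k) % 4] for k in range(4)) for n in range(4)]

# DFT route, kernel used as-is -- no extra flipping step:
via_dft = [round(v.real, 9)
           for v in idft([a * b for a, b in zip(dft(x), dft(h))])]

# DFT route with a pre-flipped kernel gives something else (correlation):
h_flip = [h[(-n) % 4] for n in range(4)]
flipped = [round(v.real, 9)
           for v in idft([a * b for a, b in zip(dft(x), dft(h_flip))])]
```

<p>Here <code>via_dft</code> matches <code>direct</code> exactly, while <code>flipped</code> does not (the kernel being asymmetric is what exposes the difference).</p>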
| 495
|
convolution
|
Concept check on doing multiply in time with circular convolution in frequency domain
|
https://dsp.stackexchange.com/questions/29100/concept-check-on-doing-multiply-in-time-with-circular-convolution-in-frequency-d
|
<p>So I think I have my thinking correct on what I'm trying to do, but I just want to make sure I'm not assuming something that isn't true. Below is what I'm currently doing in the time domain.</p>
<p>$r(t)$ is my received signal<br>
$T$ is my period<br>
$MultCurrent = r(n)\cdot r(n-T)$<br>
$MultPrevious = r(n-T)\cdot r(n-2T)$<br>
$Average(i) = Average(i-1) + MultCurrent - MultPrevious$ </p>
<p>The average should be acting as a low pass filter. I'm using this to get my original data message back. This is working for me, but I'd like to do things in the frequency domain. Am I doing this correctly?</p>
<p>I'm assuming $x(n) \cdot y(n)$ is equivalent to $X(f)\star Y(f)$ where the convolution is a circular convolution.</p>
<p>$N= 32$<br>
$cconv()$ is the $N$ point circular convolution of two sequences<br>
$fft1 = fft(r(n)$ from $n$ to $N)$<br>
$fft2 = fft(r(n-T)$ from $n-T$ to $N)$<br>
$fft3 = fft(r(n-2T)$ from $n-2T$ to $N)$<br>
$ConvCurrent = cconv(fft1,fft2)$<br>
$ConvPrevious = cconv(fft2,fft3)$<br>
$Average(i) = Average(i) + ConvCurrent(0) - ConvPrevious(0)$ </p>
<p>$ConvCurrent(0)$ is bin 0 of the result of the circular convolution (DC component). This is because I'm wanting the low-pass filtered result. Is this the correct way to think about it? From what I'm thinking, I should need to transform back into the time domain. Also, if I'm wanting to filter my results before I do the circular convolution, can I just zero out all the frequency bins except for the frequency I care about? I'm not sure if this would cause any problems. Thanks a bunch!</p>
| 496
|
|
convolution
|
An effective algorithm for convolution of very many long signals
|
https://dsp.stackexchange.com/questions/32445/an-effective-algorithm-for-convolution-of-very-many-long-signals
|
<p>I need to calculate the following linear convolution </p>
<p>$$ y[n] = h_1[n] \circledast h_2[n] \circledast h_3[n] \circledast \cdots \circledast h_{k-1}[n] \circledast h_k[n]$$ </p>
<p>where $k$ exceeds $5000$ and the length of each $h_i[n]$ exceeds $100,000$. I tried the method based on the circular convolution/DFT, but it does not work because each $h_i[n]$ must be zero-padded, so the DFTs then take too much time and memory. Is there any other algorithm available? </p>
|
<p>So, I'm becoming a bit notorious with the whole "by using commonly available tools, how complex is your problem <em>really</em>?" business, but oh well:</p>
<p>I went ahead and simulated the "last" of your convolutions, i.e. I went ahead and picked up a FFT FIR (which is convolution by the overlap-save method Jason R <a href="https://dsp.stackexchange.com/questions/32445/an-effective-algorithm-for-convolution-of-very-many-long-signals#comment63008_32445">mentioned</a>) by dumping 500 Million random samples through a filter of 100,000 taps:</p>
<p><a href="https://i.sstatic.net/J7Rhn.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/J7Rhn.png" alt="Flow graph"></a></p>
<p>So, this flow graph produces an average total of 42 MS/s, or 420 (500x100,000) convolutions per second. The complete program, which includes initial FFT of the taps, calculating of the taps and the random pool, loading of libraries and set up of all, takes about 12 seconds. So, yes, doing this exact operation 5000 times would exceed the average coffee break's duration, and take nearly 17h. </p>
<p>I tried timing the initial FFT of the tap vector (which the FFT filter blocks automatically), but I failed - it took much less time than producing the taps vector itself. So I made a separate flow graph that just does 100,000-point transforms of input data, and includes all the data generation/copying overhead in my speed estimate. With that flow graph, my PC does 1900 FFTs per second, so it should take roughly half a millisecond to do a single one, or 2.5 s to do 5,000 of those.</p>
<p>Which also means that if you're smart, and just used the overlap-add algorithm as <a href="https://github.com/gnuradio/gnuradio/blob/v3.7.10/gr-filter/lib/fft_filter.cc" rel="nofollow noreferrer">contained in the FFT FIR block</a> and hoped that performance scales roughly linear with length of the input vector you're transforming, then every convolution would take about $\frac{12}5\text{ ms}= 2.4\text{ ms}$ longer than the previous, giving us a rough estimate of $\sum\limits_{n=0}^{5000} n\cdot 2.4\text{ ms}\approx 8\text{ h},\,25\text{ min}$.</p>
<p>Obviously, that leaves a bit of headroom for optimization. Now, Overlap-Add and Overlap-Save should actually scale exceptionally well to multi-core/distributed solutions. Also, doing a set of long FFTs sounds like something that GPUs are great at.</p>
| 497
|
convolution
|
Convolution and DFT size
|
https://dsp.stackexchange.com/questions/32449/convolution-and-dft-size
|
<p>Assuming that filtered signal $y(n)$ is given as</p>
<p>$$
y(n)=x(n)\star h(n)=\sum_{m=0}^{2N-1}x(m)h(n-m), \quad n \in[0,1,\ldots, 2N]
$$</p>
<p>where $\star$ is convolution parameter. $x(n)$ is discrete signal of lenght $2N$ and $h(n)$ is filter of the same lenght as $x(n)$. If I want to take for example $N$-point FFT of $y(n)$ how it would look like</p>
<p>\begin{align}
DFT_{N}(y(n))=\sum_{n=0}^{N-1}(x(n)\star h(n))&e^{-j\frac{2\pi nk}{N}}=\sum_{n=0}^{N-1}\sum_{m=0}^{\mathbf{2N-1}}(x(m)h(n-m))e^{-j\frac{2\pi nk}{N}}\\
&\textrm{or}\\
DFT_{N}(y(n))=\sum_{n=0}^{N-1}(x(n)\star h(n))&e^{-j\frac{2\pi nk}{N}}=\sum_{n=0}^{N-1}\sum_{m=0}^{\mathbf{N-1}}(x(m)h(n-m))e^{-j\frac{2\pi nk}{N}}\quad ?
\end{align}</p>
|
<p>Note that if $x[n]$ and $h[n]$ are non-zero in the interval $n\in [0,2N-1]$ (and, for convenience, they are defined as being zero outside that interval), the (linear) convolution sum can be written as</p>
<p>$$y[n]=(x\star h)[n]=\sum_{m=0}^nx[m]h[n-m],\qquad 0\le n<4N-1\tag{1}$$</p>
<p>Note that the upper limit is $n$ because the argument of the term $h[n-m]$ becomes negative for $m>n$.</p>
<p>So if, for whatever reason, you're only interested in $y[n]$ in the interval $n\in [0,N-1]$, you can set the upper limit in the convolution sum $(1)$ to $N-1$ (assuming that $x[n]$ and $h[n]$ are defined as zero outside the interval $n\in [0,2N-1]$). However, the result wouldn't change if you chose the upper summation interval as $2N-1$, because you would just add zeros to the result.</p>
<p>In short, both formulas in your question give the same result, under the given assumptions on the definitions of $x[n]$ and $h[n]$.</p>
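<p>A quick numerical check of this equivalence (a sketch with random length-$2N$ sequences, defined as zero outside their support):</p>

```python
import numpy as np

rng = np.random.default_rng(0)
N = 8
x = rng.standard_normal(2 * N)   # length-2N signal
h = rng.standard_normal(2 * N)   # length-2N filter

y_full = np.convolve(x, h)       # full linear convolution, length 4N - 1

# Convolution sum with the upper limit N-1 instead of 2N-1:
y_trunc = np.array([sum(x[m] * h[n - m] for m in range(N) if 0 <= n - m < 2 * N)
                    for n in range(N)])

# For n < N, the extra terms (m = N .. 2N-1) vanish anyway:
assert np.allclose(y_full[:N], y_trunc)
```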
| 498
|
convolution
|
Convolving complex exponential with box function (discrete)
|
https://dsp.stackexchange.com/questions/34195/convolving-complex-exponential-with-box-function-discrete
|
<p>If I have $$x[n] = u[n] - u[n-4]$$ where $u[n]$ is the unit step function, and $$h[n] = e^{-i\frac{\pi}{2}n}$$</p>
<p>does $x[n] * h[n] = 0$? </p>
<p>I tried doing the convolution sum and I got: $1 - i - 1 + i = 0$ and I also just tried plotting points and think I got 0 also, but I'm not sure if I did either of these correctly. Is there any easier or intuitive way to think about convolving a discrete complex exponential with a function, or (if it's easier the other way around), convolving a discrete box with another function?</p>
|
<p>The discrete-time Fourier transform (DTFT) of a length $N$ "discrete box" sequence starting at $n=0$ is</p>
<p>$$\sum_{n=0}^{N-1}e^{-jn\omega}=e^{-j(N-1)\omega/2}\frac{\sin\left(\frac{N\omega}{2}\right)}{\sin\left(\frac{\omega}{2}\right)}\tag{1}$$</p>
<p>The numerator of $(1)$ has zeros at frequencies</p>
<p>$$\omega_{0,k}=\frac{2k\pi}{N}\tag{2}$$</p>
<p>You can interpret the convolution as filtering the signal $x[n]=e^{-jn\pi /2}$ with a filter with its frequency response given by $(1)$. With $N=4$ you get zeros of the frequency response at $\omega_{0,k}=k\pi /2$, so the complex exponential with frequency $\omega=-\pi/2$ is completely suppressed because the filter's frequency response has a zero at exactly that frequency.</p>
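<p>This can also be verified numerically; a small sketch, using a finite window of the (infinite) exponential and keeping only the steady-state samples where the box fully overlaps it:</p>

```python
import numpy as np

N = 4
n = np.arange(-20, 21)               # a finite window of the (infinite) exponential
h = np.exp(-1j * np.pi / 2 * n)      # h[n] = e^{-j(pi/2)n}
x = np.ones(N)                       # x[n] = u[n] - u[n-4]

y = np.convolve(h, x, mode='valid')  # steady-state output samples only
assert np.allclose(y, 0)             # every length-4 sum of h is 1 - j - 1 + j = 0
```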
| 499
|
spectral analysis
|
Spectral Analysis vs. Spectral Line Analysis
|
https://dsp.stackexchange.com/questions/26021/spectral-analysis-vs-spectral-line-analysis
|
<p>We usually talk about "spectral analysis" but some resources (<a href="http://www.sciencedirect.com/science/article/pii/016516849390090W" rel="nofollow">this paper</a> or <a href="http://www.alma.inaf.it/images/Moments.pdf" rel="nofollow">this doc</a>) talk about "spectral line analysis".</p>
<p>Does this make sense to you, i.e. are the 2 fields actually different or the 2 names refer to the same thing?</p>
|
<p>Spectrum analysis is more general: it involves looking at the entire spectrum of a given signal.</p>
<p>Spectral line analysis assumes that the spectrum contains several peaks (lines) of interest at specific frequencies. The aim then is to find the precise frequency, magnitude, and phase of those peaks (lines).</p>
| 500
|
spectral analysis
|
Unexpected peaks in spectral analysis
|
https://dsp.stackexchange.com/questions/35593/unexpected-peaks-in-spectral-analysis
|
<p>I am doing the spectral analysis of some antenna recorded signals by computing PSD (logarithmic absolute value of the FFT) and the spectrogram of the signal. The problem is that I get some unexpected spikes in the signal that appear through all the signal, as it can be seen in the next figure:</p>
<p><a href="https://i.sstatic.net/Dd2oG.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Dd2oG.jpg" alt="enter image description here"></a></p>
<p>The two peaks that appear are the spikes I am speaking about, and they are the same in all the signal. The center spike is not located in $0\textrm{ Hz}$, so it is not a DC offset of the signal. This is giving me some problems, so I would like to know what can they be or how can they be removed (maybe I am not computing the spectral analysis correctly). </p>
<p>I have also performed Welch PSD estimations, and the peaks still appear. I have been told that it can be some internal interference from the electronics recording the data or some issue with the ADCs.</p>
|
<p>The large spike at ~9MHz could be some form of interference from electronics near the receiver or internal to the receiver (maybe from using a poorly conditioned power source). My best guess for the spike at DC is a DC bias from quantization. If the digitizer truncates instead of rounding, the resulting digital signal will have a non-zero mean and therefore a DC offset. We had this happen in an FPGA downconverter implementation at a previous job. Once they changed the FPGA to round, the DC offset went away. If you have ever used an Ettus USRP radio, you see the same DC offset in many of those boxes.</p>
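<p>The truncation-bias mechanism is easy to reproduce; a sketch with a synthetic zero-mean signal (the quantizer step of 1/256 is arbitrary):</p>

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.standard_normal(100_000)           # zero-mean "analog" signal

scale = 256                                # hypothetical quantizer step of 1/256
e_trunc = np.floor(x * scale) / scale - x  # error of a truncating quantizer
e_round = np.round(x * scale) / scale - x  # error of a rounding quantizer

print(e_trunc.mean())   # ≈ -1/(2*scale) ≈ -0.002: a systematic DC offset
print(e_round.mean())   # ≈ 0: rounding is essentially unbiased
```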
| 501
|
spectral analysis
|
Spectral analysis of positive signals
|
https://dsp.stackexchange.com/questions/9220/spectral-analysis-of-positive-signals
|
<p>Suppose that I have a sensor that can acquire samples $X[k]$ of the Fourier transform of an unknown signal $Y[t]$. An example is MRI, where the acquired data is in $k-$space. Now suppose that the unknown signal $Y[t]$ is known to be real and non-negative. My question is: is there a principled way to incorporate this knowledge into the spectral analysis algorithm that will estimate $Y[t]$ from $X[k]$, in order to produce an estimate with less bias or variance? I am thinking of non-parametric spectral estimation algorithms. A naive way of course would be to take the real part of $Y[t]$ and clip the negative values, but this does not seem to be optimal. I am looking for some sort of Cadzow's denoising method for spectral data. </p>
|
<p>To give a complete answer to this question you're going to need to provide more details about the kind of models you're considering in the first place. But yes, in many cases you can augment those models with <em>a priori</em> constraints on $Y[t]$, such as $0 \leq Y[t] \leq 1$. </p>
<p>For example, if the standard model has some sort of least-squares structure, then adding constraints of that type turns the problem into a bound-constrained least squares problem. There are a variety of approaches to solving such problems, and while they are more expensive than standard least squares, they are quite tractable. And it's very likely that such constraints will produce a better reconstruction.</p>
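<p>For instance, a non-negativity constraint can be imposed with a classic NNLS solver; a sketch on synthetic data (the forward model <code>A</code> is a random stand-in, not an actual Fourier sampling operator):</p>

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 10))             # hypothetical forward model
y_true = rng.uniform(0, 1, 10)                # real, non-negative ground truth
b = A @ y_true + 0.05 * rng.standard_normal(50)

y_ls = np.linalg.lstsq(A, b, rcond=None)[0]   # unconstrained LS: may go negative
y_nn, _ = nnls(A, b)                          # bound-constrained LS: y >= 0
assert np.all(y_nn >= 0)
```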
<p>Even without knowing more, though, I will say this: if your modeling approach does not produce real signals <em>naturally</em>, then you are almost certainly using the wrong modeling approach. It concerns me that you are even proposing taking the real part of the output of some other model. You should be searching the space of real signals if you know that to be the underlying structure.</p>
| 502
|
spectral analysis
|
Chi-squared distribution variable spectral analysis
|
https://dsp.stackexchange.com/questions/56226/chi-squared-distribution-variable-spectral-analysis
|
<p>I an interested in spectral analysis of a random signal.</p>
<p>Assume the signal <span class="math-container">$n(t)\sim\mathcal{N}(0,\sigma^2)$</span> is white Gaussian noise with zero mean and variance <span class="math-container">$\sigma^2$</span>.</p>
<p>The test signal is <span class="math-container">$z(t)=n^2(t)$</span>.</p>
<p>Obviously, <span class="math-container">$z(t)$</span> follows the chi-squared distribution with 1 degree of freedom.</p>
<p>The histogram of <span class="math-container">$z(t)$</span> with <span class="math-container">$\sigma^2=1$</span>.</p>
<p><a href="https://i.sstatic.net/2mPrP.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/2mPrP.jpg" alt="waveform of z(t)"></a></p>
<p>The Normalized FFT spectrum of <span class="math-container">$z(t)$</span> is shown following:</p>
<p><a href="https://i.sstatic.net/jN0l6.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/jN0l6.jpg" alt="FFT spectrum"></a></p>
<p>The spectrum has an impulse at the zero frequency and a flat platform over the other frequencies. Obviously, the impulse is caused by the DC component in <span class="math-container">$z(t)$</span> with <span class="math-container">$E(z)=\sigma^2$</span>.</p>
<p>My question is: is there any possible mathematical description of <span class="math-container">$z(t)$</span>? </p>
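<p>For reference, the setup described can be reproduced in a few lines (a sketch with <span class="math-container">$\sigma^2=1$</span>; the DC bin recovers <span class="math-container">$E(z)=\sigma^2$</span>, and the remaining bins form the flat platform):</p>

```python
import numpy as np

rng = np.random.default_rng(0)
n = rng.standard_normal(2**16)   # n(t) ~ N(0, 1)
z = n**2                         # chi-squared with 1 degree of freedom

Z = np.abs(np.fft.rfft(z)) / len(z)
print(Z[0])            # ≈ 1: the impulse at DC, equal to E(z) = sigma^2
print(Z[1:].mean())    # small and flat: the broadband platform
```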
| 503
|
|
spectral analysis
|
Spectral analysis on polling data
|
https://dsp.stackexchange.com/questions/66314/spectral-analysis-on-polling-data
|
<p>First-time questioner...</p>
<p>I'm writing an application (hobby/non-commercial, asp.net c#) to process polling data (answers to a single yes/no question) automatically collected from a website. The answers accumulate (yes = +1, no = -1) in a running total that ticks upwards and downwards over time. The application will display this data stream in a chart in real-time as answers are submitted, as well as - hopefully - provide some deeper analysis in, or near, real-time.</p>
<p>The goal is to track the time of each data point and get at any cyclical tendencies or directional movement over time. I'd like to try to do spectral analysis on the raw feed of data, which I understand is a naturally-occurring discrete-time sequence, one that's also non-equispaced in time since the data come in sporadically, like a political debate audience mashing yes and no buttons as they see fit through the course of the discussion.</p>
<p>My cursory initial investigation has led me to understand that linear shift/time-invariant systems generally require samples at integer-spaced intervals (and that they <em>can</em> work for non-time-equispaced data, but care would need to be taken in the design and use of such systems). I'd also like to avoid interpolating between the discrete data points, as a workaround, so as not to make up information.</p>
<p>I'm optimistic this isn't an impossible dream, and that there's a way to process this data as it occurs naturally. <strong>If so, would NFFT be the right set of tools to use? Is there a tool in particular, or some altogether different processing paradigm, that would apply best?</strong> I'm a lay person to the field; any help is appreciated on asking my question most precisely.</p>
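<p>One standard tool for irregularly sampled data of this kind is the Lomb-Scargle periodogram; a minimal sketch on synthetic data (the 0.2 Hz cycle, the timestamps, and the noise level are invented for illustration):</p>

```python
import numpy as np
from scipy.signal import lombscargle

rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0, 100, 500))            # sporadic answer timestamps (s)
x = np.sin(2 * np.pi * 0.2 * t) + 0.3 * rng.standard_normal(500)

freqs = np.linspace(0.01, 1.0, 200)              # candidate frequencies (Hz)
pgram = lombscargle(t, x - x.mean(), 2 * np.pi * freqs)  # expects angular freqs
print(freqs[np.argmax(pgram)])   # ≈ 0.2: the embedded cycle is recovered
```

No interpolation is involved: the periodogram is fitted directly at the observed sample times.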
| 504
|
|
spectral analysis
|
Power spectral analysis in baseband vs bandpass
|
https://dsp.stackexchange.com/questions/85115/power-spectral-analysis-in-baseband-vs-bandpass
|
<p>I have two signals <span class="math-container">$x_1(t)$</span> and <span class="math-container">$x_2(t)$</span> both with a baseband bandwidth of <span class="math-container">$W$</span> Hz. <span class="math-container">$x_1(t)$</span> is transmitted over carrier frequency <span class="math-container">$f_1$</span> Hz, and <span class="math-container">$x_2(t)$</span> over carrier frequency <span class="math-container">$f_2=f_1 + \frac{W}{2}$</span> Hz. The received signal can be written as</p>
<p><span class="math-container">$$
\begin{split}
r(t)=&\Re\left\{x_1(t)e^{j2\pi f_1t}+x_2(t)e^{j2\pi f_2t}\right\}\\
=&\Re\left\{e^{j2\pi f_1t}\left[x_1(t)+x_2(t)e^{j2\pi (f_2-f_1)t}\right]\right\}
\end{split}
$$</span><br />
The baseband signal <span class="math-container">$x_1(t)$</span> spans <span class="math-container">$[-W,W]$</span>, while <span class="math-container">$x_2(t)e^{j2\pi(f_2-f_1)t}$</span> spans <span class="math-container">$[-\frac{W}{2},\frac{3W}{2}]$</span>. I want to calculate the power of <span class="math-container">$x_2(t)e^{j2\pi (f_2-f_1)t}$</span> that interferes with <span class="math-container">$x_1(t)$</span> from the power spectral density (PSD) of the aforementioned signal.</p>
<p>I know to find the power of <span class="math-container">$x_1(t)$</span> from PSD, I need to find the area under the PSD in <span class="math-container">$[0, W]$</span>, and multiply it with 2 to account for the negative frequencies. Where I am not sure is whether to calculate the power of <span class="math-container">$x_2(t)e^{j2\pi (f_2-f_1)t}$</span> from <span class="math-container">$[\frac{-W}{2}, W]$</span> or the power in <span class="math-container">$[0, W]$</span>, and if I need to double any part of the frequencies?</p>
<p>In the bandpass it's clearer, since <span class="math-container">$x_2\cos\left(2\pi f_2t\right)$</span> interferes with <span class="math-container">$x_1\cos(2\pi f_1 t)$</span> in the frequency range <span class="math-container">$[f_1-\frac{W}{2}, f_1+W]$</span>. However, to find the PSD in the bandpass, I need to use a sampling frequency greater than <span class="math-container">$2f_1+W$</span>, which is very large in case <span class="math-container">$f_1$</span> is in GHz.</p>
<p>To sum-up:</p>
<ul>
<li>Which is better to do the spectral analysis in baseband or bandpass</li>
<li>How to deal with spectral of <span class="math-container">$x_2(t)$</span> in baseband in the example above?</li>
</ul>
|
<p>It should not matter whether you perform your PSD estimation in baseband or not. Basebanding a signal is carried out with a complex demodulation, which simply shifts the frequency of the measurement. For a bandlimited signal, appropriate low-pass filtering will practically prevent amplitude distortion. Just make sure your integration bounds are consistent with the frequency shift of the demodulation. For example, if you were finding the PSD of <span class="math-container">$x_1(t)$</span> and demodulating by <span class="math-container">$f_1$</span>, your bounds would shift from <span class="math-container">$[f_1-W,f_1+W]$</span> to <span class="math-container">$[-W, W]$</span>.
PSD estimation typically only requires multiplying the Fourier transform by its conjugate - not scaling it by a factor of two. There are many PSD estimators out there, many of which have been discussed on DSP.SE.</p>
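<p>A small numerical illustration of this bookkeeping (a sketch with one synthetic tone; the image at -430 Hz is the component a low-pass filter would remove in a practical demodulator):</p>

```python
import numpy as np
from scipy.signal import welch

fs = 1000.0
f1 = 200.0
t = np.arange(100_000) / fs
x = np.cos(2 * np.pi * (f1 + 30) * t)    # real bandpass tone at f1 + 30 Hz

bb = x * np.exp(-2j * np.pi * f1 * t)    # complex demodulation by f1

f_b, P_b = welch(bb, fs=fs, nperseg=1000, return_onesided=False)

# The tone reappears at +30 Hz; its image sits at -(2*f1 + 30) = -430 Hz.
print(np.sort(f_b[P_b > P_b.max() / 2]))   # ≈ [-430, 30]
```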
<p>Separating <span class="math-container">$x_1(t)$</span> from <span class="math-container">$x_2(t)$</span> spectrally is impossible unless you have prior information about those components, can make reasonable assumptions about the spectral structure, or can measure them individually at some point to aid you in either of those tasks. If you know nothing about the signals, how can you tell the difference between the signal you stated and a single signal without separable components and a bandwidth of <span class="math-container">$[-W, 3W/2]$</span>?</p>
| 505
|
spectral analysis
|
How to use function spectrum in R to perform spectral analysis
|
https://dsp.stackexchange.com/questions/47271/how-to-use-function-spectrum-in-r-to-perform-spectral-analysis
|
<p>I intend to perform spectral analysis with <code>R stats::spectrum</code> and then plot the power spectrum.</p>
<p>My code is listed here:</p>
<pre><code> t <- seq(0,1024,by=0.1)
x <- cos(2*pi*t) + 0.75*sin(2*pi*4*t) + 2*sin(2*pi*6*t)
x.spec <- spectrum(x,log="no",span=5,plot=FALSE)
del <- 0.1
spx <- x.spec$freq/del
spy <- 2*x.spec$spec
plot(spy~spx,xlab="frequency",ylab="spectral density",type="l")
</code></pre>
<p>The signal has 3 periodic components at 1 Hz, 4 Hz, and 6 Hz, but the final spectrum only shows the first 2 frequencies, without 6 Hz. I also don't know why the range of <code>x.spec$freq</code> is [0,0.5] before scaling.</p>
<p>Please point out my possible conceptual misunderstanding and guide me on how to plot the spectrum correctly. Thank you very much.</p>
<p><a href="https://i.sstatic.net/g5hVA.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/g5hVA.png" alt="enter image description here"></a></p>
|
<p>Hi: Your sampling frequency is $f_{s} = 10$ Hz, but the frequency of the third component is $f_{3} = 6$ Hz. Therefore the Nyquist criterion for the maximum frequency is not met, because $2 \cdot 6 = 12$, which is not less than $f_{s} = 10$. I'm new at this, but hopefully this is the reason why. Someone will surely correct me if not.</p>
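<p>Concretely, at $f_s = 10$ Hz the 6 Hz component aliases to 4 Hz, landing right on top of the existing 4 Hz term, which is why only two peaks appear. A quick numerical check on the same sampling grid (endpoint handling aside):</p>

```python
import numpy as np

fs = 10.0
t = np.arange(0, 1024, 1 / fs)                  # same grid as seq(0, 1024, by=0.1)
x6 = 2 * np.sin(2 * np.pi * 6 * t)              # the "6 Hz" component
alias = 2 * np.sin(2 * np.pi * (6 - fs) * t)    # a -4 Hz (i.e. 4 Hz) sinusoid

assert np.allclose(x6, alias)                   # identical at every sample instant
```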
| 506
|
spectral analysis
|
Most appropriate spectral analysis method for a recording of a saxophone note?
|
https://dsp.stackexchange.com/questions/35224/most-appropriate-spectral-analysis-method-for-a-recording-of-a-saxophone-note
|
<p>I'm currently doing an investigation where I am attempting to find the resonant frequencies of an alto saxophone at various different notes. I have taken audio recordings of the notes with a sample rate of $44100\textrm{ Hz}$, and exported them into CSV files. Now, I am attempting to analyze this data using the signal processing plugin for MATLAB, but I'm feeling a little bit overwhelmed with all the different options, as I'm not sure which spectral analysis method is most suitable for my task.</p>
<p>In MATLAB, I'm seeing a number of spectral analysis methods including FFT, Burg Method, Welch Method, Multitaper Method, and more - but I'm not sure of their specific applications.</p>
<p>Again, my goal is to clearly identify the major spectral peaks (formants) which correspond to the resonant frequencies of the Saxophone bore. The data was taken with a condenser microphone at a sample rate of $44100\textrm{ Hz}$, with the Saxophone bore exactly 1 meter from the microphone. Ultimately, I'm looking to create a plot similar to the image I have attached. </p>
<p>Please excuse my ignorance on this subject, this is my first time using MATLAB and I am not well-versed in digital signal processing. If anyone could help me with this, I would be extremely grateful.</p>
<p><img src="https://i.sstatic.net/kzhY0.png" alt="enter image description here"></p>
|
<p>The plot OP shared looks like a log magnitude spectrum (black line) plotted with its envelope (red line), so from what OP says, OP needs to plot the log magnitude spectrum of the recorded audio signal. One thing to remember for this analysis is that the same note should be played for the entire duration of the signal being analysed. If not, one should use proper windowing to ensure that the signal remains stationary (in the same note) for the entire duration of the window under analysis. I am assuming that a note corresponds to a few particular frequencies.</p>
<p>That is the way to plot/analyse a graph similar to the one shared. Are you also looking for other methods for determining the peaks corresponding to a particular note at a time?</p>
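<p>OP asked about MATLAB, but the same workflow can be sketched in Python terms (the 220 Hz harmonic series below is a stand-in for a real saxophone recording, not actual data):</p>

```python
import numpy as np
from scipy.signal import welch, find_peaks

fs = 44100
t = np.arange(fs) / fs                     # 1 s of synthetic "recording"
# Harmonic series at 220 Hz with decaying amplitudes, plus a little noise:
x = sum(np.sin(2 * np.pi * 220 * k * t) / k for k in range(1, 6))
x = x + 0.01 * np.random.default_rng(0).standard_normal(fs)

f, Pxx = welch(x, fs=fs, nperseg=8192)     # averaged (Welch) periodogram
peaks, _ = find_peaks(10 * np.log10(Pxx), prominence=20)
print(f[peaks])                            # ≈ 220, 440, 660, 880, 1100 Hz
```

The same structure (averaged PSD, then peak picking on the dB spectrum) carries over directly to <code>pwelch</code> and <code>findpeaks</code> in MATLAB.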
| 507
|
spectral analysis
|
Minimum discernible frequency in power spectral analysis
|
https://dsp.stackexchange.com/questions/76662/minimum-discernible-frequency-in-power-spectral-analysis
|
<p>Say I have a signal of length <code>20s</code> that contains signal from various (unknown) biological sources, e.g. heartbeat (~0.2Hz), respiration (~1Hz), and possibly some very-low frequency oscillations (~0.05Hz). My sampling frequency is <code>fs=10Hz</code>. I conduct a periodogram PSD estimate (hanning window, mean-subtracted, <code>N_FFT=500</code>), yielding a spectrum with frequencies <code>0:0.02:5 Hz</code>.</p>
<p>Say I have several 20s recordings in two different settings and I want to conduct a statistical analysis of the spectral power in specific frequency bins or bands. Is there a lower limit to which frequencies I can use to statistically infer on biology?</p>
<p>For example, should I assume there to be a specific number of periods of frequency present in the window? E.g., if I say that at least 2 periods should be present in the window, the lowest detectable frequency is 0.1Hz. Or can I just as easily infer on the spectral power of frequencies as low as my resolution allows?</p>
|
<p>If the goal is to map out a <em>range</em> of low frequencies, then CWT is preferred over STFT, as it zooms logarithmically and will provide far more detail (<a href="https://overlordgolddragon.github.io/test-signals/" rel="nofollow noreferrer">examples</a>). If the goal is a few specific frequencies, then a targeted STFT will work (i.e. design windows for said frequencies rather than doing a linear sweep).</p>
<p>What remains is boundary effects: large windows demand data beyond the span of original input. The answer is padding, but no single perfect scheme: any method (including zero padding) is akin to imposing a <em>statistical prior</em>, i.e. assumption to best-fit the source process.</p>
<ul>
<li>I've done much work on this subject and can recommend <code>reflect</code> as the best general scheme, if nothing else is known. <a href="https://dsp.stackexchange.com/a/76236/50076">Related post</a>, <a href="https://github.com/kymatio/kymatio/discussions/752" rel="nofollow noreferrer">discussion</a></li>
<li>However, if the goal is instantaneous frequency/amplitude localization, zero might be favorable (see <a href="https://github.com/OverLordGoldDragon/ssqueezepy/blob/master/examples/extracting_ridges.py" rel="nofollow noreferrer">examples</a>).</li>
<li>For an advanced approach, see <a href="https://github.com/JosefineAtMath/BoundaryWavelets" rel="nofollow noreferrer">boundary wavelets</a>.</li>
</ul>
<p>Best results will integrate domain knowledge into the application - i.e., use the "best guess" for how the source behaves outside the measurement. If the source isn't sufficiently oscillatory to begin with, methods other than CWT/STFT may be favorable.</p>
| 508
|
spectral analysis
|
Will noise cancellation with playstation eye improve my spectral analysis?
|
https://dsp.stackexchange.com/questions/54487/will-noise-cancellation-with-playstation-eye-improve-my-spectral-analysis
|
<p>I'm doing some beginner's spectral analysis for a homemade reactive RGB LED project and was thinking of ways to improve the accuracy of the audio analysis. I'm using a PlayStation Eye with a Raspberry Pi Zero W (limited performance) and I've read several articles to try to better my understanding. According to this <a href="http://www.reid.ai/2009/11/ps3-eye-4-channel-audio-tests-on-ubuntu.html" rel="nofollow noreferrer">article</a>, the PlayStation Eye has 4 channels, where the 2nd and 3rd are reversed for noise cancellation.<br>
I assume that the 2nd channel is a reverse of the 1st and the 3rd of the 4th. Am I correct to assume that the way to go about implementing these extra channels would be to average the two sums of 1+2 and 3+4, i.e. <code>voltage = (c1+c2+c3+c4)/2</code>? </p>
<p>Below is a script I use for testing, only the basics. Note that I use a sampling rate of 20 kHz instead of 44.1 kHz due to the limited CPU resources of the Pi Zero. I also use int16, as the wiki states that the PS Eye uses 16-bit ints. Am I wrong not to use the default float? </p>
<p>The reason I want to improve it is that I haven't recorded any frequencies over 7000 Hz, which seems a bit odd to me given the vast variety of music I've tested with. I'm wondering if the PlayStation Eye is not suitable for such precise analysis, or if my methods are wrong. Is it possible to improve this with noise cancellation, or will it worsen my results? Should I rather just use a single channel, or sum channels 1+4? </p>
<pre><code>#!/usr/bin/env python3
import sounddevice as sd
import numpy as np

RATE = 20000
lowest = 200
highest = 0
h_m = 0

def print_sound(indata, frames, time, status):
    global lowest, highest, h_m
    if status:
        print(status)
    if frames == 0:
        return
    data = np.sum(indata, axis=1)
    # data = data * np.hanning(len(data))  # smooth the FFT by windowing data
    fft = abs(np.fft.rfft(data))  # calculate magnitudes from positive fft coefficients
    freq = np.fft.rfftfreq(frames, 1.0/RATE)  # calculate positive frequency bins
    peak = np.max(fft)  # highest magnitude (renamed so the builtin max isn't shadowed)
    freqPeak = freq[np.where(fft == peak)[0][0]]  # find frequency with highest magnitude
    if freqPeak < lowest:
        lowest = freqPeak
        print("lowest: %d" % freqPeak)
    if freqPeak > highest:
        highest = freqPeak
        print("highest: %d" % freqPeak)
    if peak > h_m:
        h_m = peak
        print(peak)

with sd.InputStream(dtype='int16', channels=4, samplerate=RATE, callback=print_sound):
    while True:
        response = input()
        if response in ('', 'q', 'Q'):
            break
</code></pre>
|
<p>As I read your prompt, it seems that the real question you’re asking is why you can’t record frequency peaks greater than 7kHz. The second question seems to be whether you can improve this performance through noise cancellation techniques.</p>
<p>If that’s the case, and if you’re right about the 2nd and 3rd channels being reversed, then I think your summation may introduce frequency filtering effects in your output and not necessarily true noise cancellation. This can either work for or against you. But this would depend on the orientation between the microphone and speakers, since in your link it appears that can affect the delays between the channels. Either way, I’d go with the single channel to simplify things.</p>
<p>And if you’ve only been recording music and not individual tones at greater than 7kHz, then it also makes sense that you’re not grabbing high frequency peaks since audio energy tends to be concentrated at lower frequencies. There’s a reason why telephones and AM radio cut off at around 5kHz. You might also be attenuated because of sinc roll off during the sampling process, although I suspect that’s not the big issue here.</p>
<p>Have you tried recording the entire spectrum, instead of just the peaks? And have you tried comparing that against the FFTs of the original audio files?</p>
| 509
|
spectral analysis
|
Wigner-Ville Distribution (WVD) vs STFT for Spectral Analysis
|
https://dsp.stackexchange.com/questions/86211/wigner-ville-distribution-wvd-vs-stft-for-spectral-analysis
|
<p>What are the Pros and Cons of using Wigner-Ville Distribution for spectral analysis of a signal vs taking the STFT. When is it appropriate to use one over the other and is WVD used in real-time implementations?</p>
<p><strong>Update:</strong> Since no answers were received for the first few days of this post, I tried posting the same question on ChatGPT and below is the response I got from it:</p>
<p><em>The WVD is a time-frequency representation that provides a high degree of resolution in both time and frequency. It is useful for analyzing signals that have non-stationary properties, such as chirps and transient signals. The WVD can also distinguish between closely spaced frequency components, and can also show the instantaneous frequency of a signal.</em></p>
<p><em>On the other hand, the STFT is a widely used method in signal processing, and is useful for analyzing signals that are stationary or have slow time-varying properties. It provides a high degree of frequency resolution, but its time resolution is limited. Unlike WVD, it is not able to distinguish between closely spaced frequency components, and it does not show the instantaneous frequency of a signal to the degree in which WVD can.</em></p>
<p><em>The major disadvantage of the WVD is the computational cost. It has a computational complexity of O(N^2) which makes it impractical for large data sizes.</em></p>
<p>Is the above response true or false?</p>
<p>I had originally added this as a possible answer, removed it due to objection from a member (Jdip) due to inaccuracy and adding it back as part of question as asked by a member (OverLordGoldDragon) in the comments. Hope posting ChatGPT responses is not against the rules or anything.</p>
<p>Peter K. also mentioned a <a href="https://ieeexplore.ieee.org/document/149998" rel="nofollow noreferrer">link</a> to his publication in the comments of the deleted answer which I am adding here as well.</p>
<p>Peter K. also mentioned in OverlordGoldDragon's answer that the WVD is not very much useful in the presence of noise, a separate question has been added <a href="https://dsp.stackexchange.com/questions/86297/comparison-of-wvd-and-stft-spectral-analysis-in-the-presence-of-noise">here </a> for discussion on that in case anyone is interested.</p>
|
<p>I preface this answer with that I know little about WVD and never worked with it, but do know time-frequency, and synchrosqueezing, which shares similarities. Part of my answer will be for SSQ.</p>
<h3>Re: ChatGPT</h3>
<blockquote>
<p>The WVD is a time-frequency representation that provides a high degree of resolution in both time and frequency.</p>
</blockquote>
<p>No, oversimplified</p>
<blockquote>
<p>It is useful for analyzing signals that have non-stationary properties, such as chirps and transient signals. The WVD can also distinguish between closely spaced frequency components, and can also show the instantaneous frequency of a signal.</p>
</blockquote>
<p>Yes</p>
<blockquote>
<p>On the other hand, the STFT is a widely used method in signal processing, and is useful for analyzing signals that are stationary or have slow time-varying properties.</p>
</blockquote>
<p>So is DFT, misses the point</p>
<blockquote>
<p>It provides a high degree of frequency resolution, but its time resolution is limited.</p>
</blockquote>
<p>Nonsense, the whole point is we can tune it</p>
<blockquote>
<p>Unlike WVD, it is not able to distinguish between closely spaced frequency components,</p>
</blockquote>
<p>No</p>
<blockquote>
<p>and it does not show the instantaneous frequency of a signal to the degree in which WVD can.</p>
</blockquote>
<p>Yes</p>
<blockquote>
<p>The major disadvantage of the WVD is the computational cost. It has a computational complexity of O(N^2) which makes it impractical for large data sizes.</p>
</blockquote>
<p>No</p>
<h3>Re: dorian111</h3>
<blockquote>
<p>When increasing the sampling length to improve the frequency domain resolution, the time domain resolution will deteriorate.</p>
</blockquote>
<p>I can't tell if this refers to WVD or STFT. For STFT or any localized time-frequency method, it's wrong - the sampling <em>rate</em>, not <em>duration</em>, affects time resolution. WVD appears to have a global temporal operator, so it may be true there.</p>
<blockquote>
<p>For WVD, it is generally believed that it is not limited to the uncertainty principle and can achieve the maximum mathematical accuracy of frequency domain resolution.</p>
</blockquote>
<p>No method completely escapes Heisenberg, but it's true we can achieve practically perfect localization for certain classes of signals.</p>
<blockquote>
<p>The general conclusion is that the accuracy of WVD is much higher than that of STFT.</p>
</blockquote>
<p>No. This isn't even true for synchrosqueezing, which significantly improves upon WVD. The worst case in SSQ vs STFT is close, I can't say SSQ is better, and certainly not "much better". But it is true that the <em>best</em> case for SSQ is far superior.</p>
<blockquote>
<p>The disadvantage is that the frequency spectrum will appear pseudo-frequency when there are multiple frequency signals in the data.</p>
</blockquote>
<p>Unsure what this means; WVD is a time-frequency representation, so there's no "frequency spectrum" in the standard sense. It's true that introducing additional intrinsic modes worsens WVD, esp. with "quadratic interference" (which SSQ lacks).</p>
<blockquote>
<p>Compared with STFT, the calculation cost is much higher, and the performance is the difference between O(N^2 log(N)) and O(kN log(N)). When the STFT sliding length is taken as the minimum limit of 1, k=N, and the performance of WVD is the same.</p>
</blockquote>
<p>Not necessarily. The compute burden depends on what we need WVD for, and whether we window. Of chief consideration is information, and how much we lose, which <a href="https://dsp.stackexchange.com/a/80920/50076">can be measured</a> - and conversely, how much we gain by computing the full WVD as opposed to a part of it. The original MATLAB synchrosqueezing toolbox used <code>n_fft=N</code>, with logic that DFT is length N, and which most will agree is completely unnecessary.</p>
<p>Without windowing, I imagine WVD is like a fancified Hilbert transform and struggles with <a href="https://dsp.stackexchange.com/a/83257/50076">more than one component</a> - see <a href="https://www.di.ens.fr/%7Emallat/papiers/WaveletTourChap1-2-3.pdf" rel="nofollow noreferrer">Figure 4.18</a> and below. Windowing, particularly with kernels which make WVD complex, enable tremendous optimization, similar to CWT. These optimizations are unrealized in most code... <a href="https://dsp.stackexchange.com/a/83495/50076">for now</a>.</p>
<blockquote>
<p><span class="math-container">$$X(\tau,f)=\int_{-\infty}^\infty x(t)w(t-\tau)e^{-j2\pi f t}dt $$</span></p>
</blockquote>
<p>This is a correct STFT formulation which is what most libraries implement, but I'd like to note that <a href="https://github.com/librosa/librosa/issues/1279" rel="nofollow noreferrer">it's bad</a>.</p>
<blockquote>
<p><span class="math-container">$$W_x(t,f)=\int_{-\infty}^{\infty}x(t+\frac\tau{2})x^{*}(t-\frac\tau{2})e^{-j2\pi f\tau}d\tau $$</span></p>
</blockquote>
<p>From this formula, something sticks out: <span class="math-container">$x(... \pm \tau)$</span>. This screams boundary effects - a major disadvantage compared to STFT.</p>
<hr>
<h3>Re: original question</h3>
<p>Two major advantages of the <em>spectrogram</em> (<code>abs(STFT)</code>) over WVD, or at least SSQ, is <em>stability</em>, and <em>sparsity</em>. SSQ, as a feature, is quite brittle to noise (in some ways, yet also more robust in other ways, see <a href="https://dsp.stackexchange.com/a/71399/50076">related</a>). Sparsity may come as a surprise, as SSQ claims that's <em>its</em> advantage over STFT - and it is - but the form of sparsity that matters a lot is <em>subsampleability</em>.</p>
<p>Note, <code>hop_size</code> is just the subsampling factor for STFT. We can hop in the spectrogram because there's high redundancy along time, and doing so loses little information. Not the case with SSQ, which generates rough and spiky time-frequency geometries - subsampling it a lot means losing a lot, and <em>not</em> subsampling likewise means keeping too many data points to be useful for machine learning, not because of data size, but correlated features prone to overfitting.</p>
<p>As I understand, WVD is more a measurement tool - it can be used to describe time-frequency characteristics of time-frequency kernels, e.g. wavelets. Though really I don't know where its applicability ends.</p>
<p>Lastly, third major advantage, STFT doesn't straight up invent signals:</p>
<img src="https://i.sstatic.net/VgN8X.png" width="800">
<p><a href="https://www.di.ens.fr/%7Emallat/papiers/WaveletTourChap1-2-3.pdf" rel="nofollow noreferrer">Figures 4.18 & 4.20, Wavelet Tour</a>. Left is plain WVD, such interference is a dealbreaker for most applications. Right is windowed WVD, which attenuates interferences, but "reduces the time-frequency resolution" (under Eq 4.156).</p>
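<p>To make the windowed-WVD point concrete, here is a minimal discrete pseudo-WVD sketch in NumPy (my own illustration, not what Mallat or any particular toolbox implements): one time slice is the FFT over lag <code>m</code> of the product <code>x[n+m] * conj(x[n-m])</code>, zero-padded at the boundaries. Note the doubled-frequency mapping, since the effective lag step is <code>2m</code>:</p>

```python
import numpy as np

def pseudo_wvd_slice(x, n, L):
    """One pseudo-WVD time slice: FFT over lag m of x[n+m] * conj(x[n-m])."""
    M = 2 * L
    r = np.zeros(M, dtype=complex)
    for i in range(M):
        m = i if i < L else i - M            # map FFT index to signed lag m
        if 0 <= n + m < len(x) and 0 <= n - m < len(x):
            r[i] = x[n + m] * np.conj(x[n - m])
    return np.fft.fft(r)

t = np.arange(128)
x = np.exp(2j * np.pi * 0.1 * t)             # analytic tone at 0.1 cycles/sample
W = pseudo_wvd_slice(x, n=64, L=32)
# lag products rotate at 2*f0 = 0.2 cycles/lag -> peak near bin 0.2 * 64 = 12.8
print(np.argmax(np.abs(W)))                  # -> 13
```

For a single tone the ridge sits at twice the normalized frequency on this lag grid; for real multicomponent signals, the cross terms Mallat's Figure 4.18 shows appear between every pair of components.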
| 510
|
spectral analysis
|
Which sample autocovariance sequence estimate should I use for spectral analysis?
|
https://dsp.stackexchange.com/questions/95335/which-sample-autocovariance-sequence-estimate-should-i-use-for-spectral-analysis
|
<p>Is there any reason to choose a biased autocovariance sequence estimate over an unbiased estimate for power spectral density analysis? Why would we want to introduce bias into our estimate of the power spectral density?</p>
|
<p><strong>TL;DR</strong>
While bias does decrease resolution in spectral estimators, there are two primary reasons the biased autocovariance sequence should be used. The first is that the unbiased ACS is indefinite, which can lead to negative spectral estimates, whereas the biased ACS is positive definite (see <a href="https://dsp.stackexchange.com/questions/95329/why-does-the-unbiased-sample-autocorrelation-sequence-not-peak-at-lag-0/95330#95330">Why does the unbiased sample autocorrelation sequence not peak at lag 0?</a>). The other reason is that there is equivalence between the periodogram and the correlogram for the biased ACS estimate, whereas the unbiased ACS estimate does not achieve this equivalence.</p>
<p><strong>Proof</strong></p>
<p>One reason why the biased autocovariance sequence estimate is important for spectral estimation is that it gives equality between the periodogram and correlogram, under the mild assumption that
<span class="math-container">\begin{equation}
\lim_{N\to\infty}\frac{1}{N}\sum_{k=-N}^{N}\left\lvert k\right\rvert c[k] = 0
\end{equation}</span>
First, we have two definitions of the power spectral density
<span class="math-container">\begin{align}
\phi(\omega) &= \sum_{k=-\infty}^{\infty}c[k]e^{-j\omega k} \\
\phi(\omega) &= \lim_{N\to\infty}E\left\{\frac{1}{N}\left\lvert\sum_{k=0}^{N-1}y[k]e^{-j\omega k}\right\rvert^{2}\right\}
\end{align}</span>
We assume <span class="math-container">$y$</span> is a sequence of zero-mean random variables. In the practical case, the sample mean is subtracted from the signal <span class="math-container">$y$</span>.</p>
<p>Proving this equality, we have
<span class="math-container">\begin{align}
\lim_{N\to\infty}E\left\{\frac{1}{N}\left\lvert\sum_{k=0}^{N-1}y[k]e^{-j\omega k}\right\rvert^{2}\right\} = \lim_{N\to\infty}\frac{1}{N}\sum_{k=0}^{N-1}\sum_{l=0}^{N-1}E\left\{y[k]y^{*}[l]\right\}e^{-j\omega\left(k-l\right)}
\end{align}</span>
Plugging in <span class="math-container">$\tau = k-l$</span>, we get
<span class="math-container">\begin{align}
\lim_{N\to\infty}\frac{1}{N}\sum_{k=0}^{N-1}\sum_{l=0}^{N-1}E\left\{y[k]y^{*}[l]\right\}e^{-j\omega\left(k-l\right)} &= \lim_{N\to\infty}\frac{1}{N}\sum_{\tau=-N+1}^{N-1}\sum_{k-l=\tau}E\left\{y^{*}[l]y[l+\tau]\right\}e^{-j\omega\tau} \\ &= \lim_{N\to\infty}\frac{1}{N}\sum_{\tau=-N+1}^{N-1}\left(N-\left\lvert\tau\right\rvert\right)c[\tau]e^{-j\omega\tau} \\ &= \sum_{\tau=-\infty}^{\infty}c[\tau]e^{-j\omega\tau}-\lim_{N\to\infty}\frac{1}{N}\sum_{\tau=-N+1}^{N-1}\left\lvert\tau\right\rvert c[\tau]e^{-j\omega\tau}
\end{align}</span>
Under the mild assumption previously mentioned, these two are equivalent, i.e. under the assumption the autocorrelation function decays sufficiently rapidly.</p>
<p>We then wish to show that the biased estimate meets this condition. In fact, we will show that the biased estimate meets an even stricter condition
<span class="math-container">\begin{equation}
\sum_{k=-N+1}^{N-1}\hat{c}[k] = 0; \; \forall \; N
\end{equation}</span>
To prove this is a stricter condition is straightforward. The original condition can be rewritten as
<span class="math-container">\begin{align}
\sum_{k=-N+1}^{N-1}\left\lvert k\right\rvert c[k] &= \sum_{k=1}^{N-1}kc[k] + \sum_{k=1}^{N-1}kc[-k] \\ &= \sum_{k=1}^{N-1}kc[k] + \sum_{k=1}^{N-1}kc^{*}[k] \\ &= 2\,\text{Re}\sum_{k=1}^{N-1}kc[k]
\end{align}</span>
For this sequence to approach 0 as <span class="math-container">$N\to\infty$</span>, the unweighted sum
<span class="math-container">\begin{equation}
2\,\text{Re}\sum_{k=1}^{N-1}c[k]
\end{equation}</span>
must approach 0 sufficiently rapidly such that it beats out the weighting by <span class="math-container">$k$</span>. Thus, if
<span class="math-container">\begin{equation}
\sum_{k=-N+1}^{N-1}c[k]=0 \; \forall N
\end{equation}</span>
this ACS meets the original constraint.</p>
<p>To prove the biased ACS satisfies the stricter condition, the following is recognized
<span class="math-container">\begin{align}
\sum_{k=-N+1}^{N-1}\hat{c}[k] &= \hat{c}[0] + \sum_{k=1}^{N-1}\hat{c}[k]+\hat{c}[-k] \\ &= \hat{c}[0] + \sum_{k=1}^{N-1}\hat{c}[k]+\hat{c}^{*}[k] \\ &= \hat{c}[0] + 2\text{Re}\sum_{k=1}^{N-1}\hat{c}[k]
\end{align}</span>
For the autocovariance sequence, the mean is 0, but this proof applies for non-zero means without loss of generality. This gives
<span class="math-container">\begin{equation}
0 = \frac{1}{N}\sum_{k=0}^{N-1}x[k] = \sum_{k=0}^{N-1}x[k]
\end{equation}</span>
What is interesting is that the term <span class="math-container">$\hat{c}[0]$</span> represents all of the "auto" terms, and the term <span class="math-container">$2\text{Re}\sum_{k=1}^{N-1}\hat{c}[k]$</span> represents all of the "cross" terms! Therefore, this can be rewritten as
<span class="math-container">\begin{align}
\hat{c}[0] + 2\text{Re}\sum_{k=1}^{N-1}\hat{c}[k] &= \frac{1}{N}\sum_{t=0}^{N-1}x[t]x^{*}[t] + 2\text{Re}\sum_{k=1}^{N-1}\frac{1}{N}\sum_{t=k+1}^{N-1}x^{*}[t]x[t+k] \\ &\propto \left( x[1] + \cdots + x[N] \right)^{2} \\ &= \left(\sum_{t=1}^{N}x[t]\right)^{2} = 0
\end{align}</span>
Also notice that, for the unbiased estimate, the scaling <span class="math-container">$\frac{1}{N}$</span> would need to be changed to <span class="math-container">$\frac{1}{N-k}$</span>. Since this scaling is not constant, we would not get the same equality for the unbiased estimate. Therefore, we have the biased estimate of the ACS meets the condition for the periodogram to be equivalent to the correlogram, giving another bonus to why the biased estimate of the ACS should be used in spectral analysis.</p>
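<p>The periodogram-correlogram identity above is exact for finite <span class="math-container">$N$</span> and is easy to check numerically. A quick NumPy sketch (my own illustration): the DFT of the biased ACS over all <span class="math-container">$2N-1$</span> lags equals the zero-padded periodogram on the same frequency grid.</p>

```python
import numpy as np

rng = np.random.default_rng(0)
N = 64
x = rng.standard_normal(N)

# Biased ACS estimate: c[k] = (1/N) * sum_t x[t] x[t+k], lags -(N-1) .. N-1
r = np.correlate(x, x, mode="full") / N            # ordered by lag -(N-1) .. N-1

# Correlogram: DFT of the ACS (nonnegative lags first, negative lags wrapped)
S = np.fft.fft(np.concatenate([r[N - 1:], r[:N - 1]]))

# Periodogram on the same 2N-1 point frequency grid
P = np.abs(np.fft.fft(x, 2 * N - 1)) ** 2 / N

print(np.allclose(S.real, P))                      # -> True (exact identity)
```

Replacing the <code>1/N</code> scaling with the unbiased <code>1/(N-|k|)</code> breaks the equality, as the answer notes.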
| 511
|
spectral analysis
|
How do I perform Spectral Analysis (FFT, Windowing, Detrending) on Sonic Anemometer Data?
|
https://dsp.stackexchange.com/questions/23900/how-do-i-perform-spectral-analysis-fft-windowing-detrending-on-sonic-anemome
|
<p>I am having trouble with all of the forums that I have looked at thus far.
I have one day's worth of Sonic Anemometer data. I want to see the spectral analysis of this data. When the anemometer data was being logged it appears that the frequency ranges between 31 and 32 hz. I am looking to analyze this data in thirty minute chunks and I would just like some guidance as to how to go about this. </p>
<p>I have already removed the bad data (stray data points with 98 m/s winds x.x). I am not sure if I am supposed to detrend the data then apply a Hamming window and take the Fourier transform, or if I'm supposed to just apply the Hamming window and then take the Fourier transform. </p>
<p>Also should I calculate the sampling frequency for each 30 minute interval since it appears to change or is that change insignificant? (or should I analyze the whole data set together?)</p>
<p>Sorry for the long post but I have been hacking away on forums all week and I cannot seem to find a consistent solution. </p>
|
<p>Take the FFT of the data. Say the data is <code>x = [ 3 4 5 6 6 9 7 ]</code>:</p>
<pre><code>y = fft(x);
% shift so the frequency axis runs from -fs/2 to fs/2
y = fftshift(y);
% plot the magnitude
plot(abs(y))
</code></pre>
<p>see link for help
<a href="https://youtu.be/z10ISekkOQs" rel="nofollow noreferrer">fft Help</a></p>
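<p>Since the snippet above skips the OP's actual questions (detrend first? window? per-chunk analysis?), here is a hedged sketch of the standard route in Python/SciPy: Welch's method detrends and windows each segment and then averages the segment periodograms, which answers both questions at once. The ~32 Hz rate matches the OP; the 2 Hz tone and linear drift are illustrative assumptions, not the OP's data:</p>

```python
import numpy as np
from scipy.signal import welch

fs = 32.0                                   # approx. anemometer sample rate
t = np.arange(0, 30 * 60, 1 / fs)           # one 30-minute chunk
x = np.sin(2 * np.pi * 2.0 * t) + 0.01 * t  # 2 Hz signal plus a slow linear trend

# detrend='linear' removes the trend from each segment *before* the window is
# applied; Welch then averages the windowed-segment periodograms
f, Pxx = welch(x, fs=fs, window="hamming", nperseg=4096, detrend="linear")
print(f[np.argmax(Pxx)])                    # peak near 2 Hz despite the trend
```

A slightly varying sample rate over a 30-minute chunk mostly just rescales the frequency axis; using each chunk's mean rate is usually adequate.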
| 512
|
spectral analysis
|
Non-Linear, Non-Stationary spectral analysis methods! When and where?
|
https://dsp.stackexchange.com/questions/59797/non-linear-non-stationary-spectral-analysis-methods-when-and-where
|
<p>I have been reading about non-linear non-stationary signal analysis methods and it seems to do this type of analysis the go-to method is the Empirical Mode Decomposition (EMD), then Hilbert Transform (HT) to get instantaneous phase and frequency.</p>
<p>However I have been doing some analysis on a non-linear, non-stationary signal by splitting the time signal into intervals with a Hanning window and Fourier transforming each interval which I think might also be the idea behind the Short Time Fourier Transform (STFT), although I am not sure.</p>
<p>Can anyone tell me what is wrong with this method, and why the EMD / HT method is more advantageous? I have also seen wavelet analysis used in similar cases, is this something I should look into?</p>
|
<p>Non-linearity and non-stationarity are <strong>non-properties</strong>. Without more details, they do not say much about the methods that may perform well, and moreover the choice depends a lot on what you really do: analysis, feature extraction, enhancement, filtering, component separation, restoration? What follows are typical sets of tools you could use:</p>
<ul>
<li>Your moving-window-Fourier definitely has some STFT flavors, and there are many non-linear extensions: <a href="https://en.wikipedia.org/wiki/Bilinear_time%E2%80%93frequency_distribution" rel="nofollow noreferrer">bilinear or quadratic time–frequency distributions</a> (Wigner-Ville, Choi–Williams, Rihaczek)</li>
<li>Non-linearity may come from changing the classical energy preservation into other norms, like robust DFTs (under an <span class="math-container">$\ell_1$</span> norm minimization), see for instance <a href="https://doi.org/10.1109/tsp.2003.812739" rel="nofollow noreferrer">Robust <span class="math-container">$L$</span>-estimation based forms of signal transforms and time-frequency representations</a>, see also <a href="https://dsp.stackexchange.com/q/59022/15892">Using different algorithms/formulas to create spectrograms</a></li>
<li>Some methods also alleviate the hypothesis that signal formation is linear: variance-stabilizing transforms, cepstral analysis or <a href="https://en.wikipedia.org/wiki/Homomorphic_filtering" rel="nofollow noreferrer">homomorphic signal processing</a> (linear transforms applied in a <span class="math-container">$\log$</span> domain)</li>
<li>Generalizations of linear approaches in polynomial or rational fraction modeling, with a recent interest in <a href="https://doi.org/10.1109/tsp.2018.2890065" rel="nofollow noreferrer">Rational Optimization for Nonlinear Reconstruction with Approximate <span class="math-container">$\ell_0$</span> Penalization</a>.</li>
</ul>
<p>All of the above can be combined somehow, like EMD at different scales in <a href="https://dx.doi.org/10.1098%2Frspa.2014.0709" rel="nofollow noreferrer">Intrinsic multi-scale analysis: a multi-variate empirical mode decomposition framework</a>. EMD has many uses and <a href="https://en.wikipedia.org/wiki/Hilbert%E2%80%93Huang_transform#Limitations" rel="nofollow noreferrer">some known limitations at the same time</a>.</p>
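<p>As a concrete aside, the moving-window Fourier analysis the OP describes is exactly the STFT. A minimal SciPy sketch on a chirp (a simple non-stationary test signal of my choosing), tracking the dominant frequency per frame:</p>

```python
import numpy as np
from scipy.signal import stft, chirp

fs = 1000
t = np.arange(0, 2, 1 / fs)
x = chirp(t, f0=10, f1=100, t1=2, method="linear")   # 10 -> 100 Hz linear sweep

# Hann-windowed segments + FFT per segment == moving-window Fourier analysis
f, tt, Zxx = stft(x, fs=fs, window="hann", nperseg=256)

ridge = f[np.argmax(np.abs(Zxx), axis=0)]            # dominant frequency per frame
print(ridge[1] < ridge[-2])                          # frequency rises over time
```

The window length fixes the time-frequency trade-off globally; EMD/HT and wavelets adapt the resolution to the signal, which is their main advantage here.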
| 513
|
spectral analysis
|
Spectral analysis of a cross correlation function
|
https://dsp.stackexchange.com/questions/68308/spectral-analysis-of-a-cross-correlation-function
|
<p>So it's widely known that the Cross Correlation of 2 signals helps us in figuring out the time delay in those signals by analyzing the peak of the correlation coefficient in the time domain.</p>
<p>For something I am working on, due to a short baseline in comparison to the wavelength of the signal in between two of my receivers, the cross correlation of the signals from my two sensors give a peak at zero which makes it hard for me to analyse them. </p>
<p>Which is what brings me to my original question i.e
What if I analyse the fft of this correlation coefficient array? Will I find anything hinting towards the time delay? </p>
<p>If not, is there anyway to figure out how I can find the time delay with such a short baseline (~50m) for a long wavelength (~20km) signal?
Thanks for the help. Any little help would be appreciated. </p>
|
<p>Cross-correlation will determine the time delay, but the scale of the signals' length relative to the delay value should be considered carefully. If you have two long signals of, say, 100 s each and you are looking for a time delay on the order of 1 ms, you don't need to cross-correlate over the whole 100 s. Instead, choose a proper window (on the order of milliseconds) of the two signals and then perform the cross-correlation.</p>
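<p>A minimal sketch of the cross-correlation delay estimate (the synthetic broadband source and 37-sample delay are my own illustration; for sub-sample delays, as in the OP's short-baseline/long-wavelength case, you would additionally interpolate around the peak or work with the cross-spectrum phase, GCC-PHAT style):</p>

```python
import numpy as np

rng = np.random.default_rng(1)
true_delay = 37                                  # samples
s = rng.standard_normal(2000)                    # broadband source
x1 = s
x2 = np.roll(s, true_delay)                      # delayed copy at second sensor

xc = np.correlate(x2, x1, mode="full")           # lags -(N-1) .. N-1
lag = np.argmax(xc) - (len(x1) - 1)
print(lag)                                       # -> 37
```

The broader the signal's bandwidth, the sharper the correlation peak; a near-monochromatic 20 km wave over a 50 m baseline gives an extremely flat peak, which is exactly the OP's difficulty.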
| 514
|
spectral analysis
|
Spectral analysis with high selectivity but low number of points
|
https://dsp.stackexchange.com/questions/73360/spectral-analysis-with-high-selectivity-but-low-number-of-points
|
<p>I'm trying to create spectral plots similar to one below (it's from the <a href="https://www.analog.com/media/en/technical-documentation/data-sheets/AD1896.pdf" rel="nofollow noreferrer">datasheet</a> of an AD1986 sample rate converter, page 8):</p>
<p><a href="https://i.sstatic.net/3Zoev.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/3Zoev.png" alt="AD1896 spectral analysis" /></a></p>
<p>In the comments below the plot, it says "Wideband FFT plot, 16K points".</p>
<p>I'm particularly interested in having high spectral selectivity (if that's the right term), with the 1kHz peak being very narrow, with 'only' having 16K FFT points.</p>
<p>I'm doing my exercise on a 1000.621 Hz sine wave, quantized to 16 bits with a 44.1kHz sample rate.</p>
<p>This code to create all the plots below can be found <a href="https://gist.github.com/tomverbeure/725305fbf5c7fb56b9c70c81a4928e5b" rel="nofollow noreferrer">here</a>.</p>
<p><strong>Step 1: straightforward FFT of 1s of samples, with and without window:</strong></p>
<p><a href="https://i.sstatic.net/8h41B.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/8h41B.png" alt="Single FFT, with and without window" /></a></p>
<p>There are 2 issues with this:</p>
<ul>
<li>there's a lot of variability in the noise</li>
<li>there's still a lot of spreading around the peak, despite using a Blackman window</li>
</ul>
<p><strong>Step 2: Average 100 FFTs over 100s of samples</strong></p>
<p><a href="https://i.sstatic.net/g5Xtr.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/g5Xtr.png" alt="100 FFTs averaged" /></a></p>
<p>This averages the noise, and it overlays perfectly with the original FFT in terms of signal power.</p>
<p><strong>Step 3: Do a single FFT over 100s of samples</strong></p>
<p><a href="https://i.sstatic.net/3ucp7.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/3ucp7.png" alt="Single FFT of 100s of samples" /></a></p>
<p>Increasing the number of samples by 100, has reduced the width of the FFT bins.</p>
<p>Of course, the noise isn't averaged, so I'm back to square one on that one. What's more, the noise floor due to quantization will be lower than for step 2, due to the increased number of bins.</p>
<p><strong>Step 4: Merge 4410000 FFT bins of step 3 back to 44100 FFT bins</strong></p>
<p>I could average a bunch of these to reduce the noise, just like in step 2, but the AD1896 image says it's a 16K point FFT. Which means that they used a different method.</p>
<p>So I think that I should also be able to merge 100 FFT bins back to 1 bin, so that I get the combination of both a narrow peak and a noise floor that is similar in value to the one in step 2.</p>
<p>Here's the result after adding the absolute values of each bucket of 100 FFTs bin into 1 FFT bins:</p>
<p><a href="https://i.sstatic.net/DY3Cg.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/DY3Cg.png" alt="Bins of large FFT merged" /></a></p>
<p>This is starting to look good, but the peak value is now well above 0dB. And the noise floor around ~-128dB is much higher than the one that was around ~-145dB of step 2.</p>
<p>What I really want is the result of step 2 with the same level of noise floor, but without the wide peak of step 3.</p>
<p>I have the feeling that adding the absolute values of each bin is not the right approach, but I don't know which would be the right approach, and why?</p>
|
<p>The best way to do this measurement: use a sine wave generator that is phase locked to your data acquisition clock, with a frequency that's an integer multiple of your sample rate divided by the FFT length. If you do this, an integer number of sine-wave periods fits exactly into your FFT record, so you don't get any spectral spreading and you don't need any windowing at all.</p>
<p>You combine different frequency bins by summing the energy of bins.</p>
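<p>A sketch of the coherent-sampling idea (the bin number is an illustrative assumption): choosing <span class="math-container">$f_0 = k \cdot f_s / N$</span> for integer <span class="math-container">$k$</span> puts the whole tone into one FFT bin, with no leakage and no window needed:</p>

```python
import numpy as np

fs = 44100.0
N = 16384                                   # 16K-point FFT, as in the datasheet
k = 372                                     # integer bin number
f0 = k * fs / N                             # coherent frequency, ~1001.3 Hz

n = np.arange(N)
x = np.sin(2 * np.pi * f0 / fs * n)

X = np.abs(np.fft.rfft(x)) / (N / 2)        # normalize so a full-scale tone -> 1.0
print(np.argmax(X) == k)                    # all energy lands in bin k: no leakage
```

Band power is then obtained by summing bin energies, e.g. <code>(X[a:b] ** 2).sum()</code>, which is also the right way to merge several fine bins into one coarse one.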
| 515
|
spectral analysis
|
How precisely can power spectral analysis be replicated?
|
https://dsp.stackexchange.com/questions/96123/how-precisely-can-power-spectral-analysis-be-replicated
|
<p>I was tasked with comparing my own PSA algorithm's results with those of a different lab. The data which is being analyzed is NREM epochs of a sleep EEG. The match I have obtained is, to my eyes, very decent in terms of the overall trend.</p>
<p>PSA is calculated as follows: divide each <span class="math-container">$30$</span>-sec epoch into <span class="math-container">$5$</span>-sec subepochs; compute power within each subepoch; average the power of the subepochs to obtain mean power in the <span class="math-container">$30$</span>-sec epoch.</p>
<p>After doing that across all <span class="math-container">$K$</span> epochs and then averaging the results, this is what I obtain compared to the other lab (in power and in dB):</p>
<p><a href="https://i.sstatic.net/Kn7YuUPG.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Kn7YuUPG.png" alt="enter image description here" /></a></p>
<p><a href="https://i.sstatic.net/tLnvw1yf.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/tLnvw1yf.png" alt="enter image description here" /></a></p>
<p>As the plots reveal, the agreement is quite decent. However, my superior is not satisfied with the agreement between the two analyses because the power spectrums of the epochs being averaged aren't that similar. For instance, the sum of power in the frequency range <span class="math-container">$0.4$</span> to <span class="math-container">$4$</span>, ordered by epoch, is:</p>
<ul>
<li>Epoch 1: <span class="math-container">$\text{myMethod} = 126.59, \text{otherLab'sMethod} = 206.72$</span></li>
<li>Epoch 2: <span class="math-container">$\text{myMethod} = 120.54, \text{otherLab'sMethod} = 122.26$</span></li>
<li>Epoch 3: <span class="math-container">$\text{myMethod} = 87.38, \text{otherLab'sMethod} = 108.05$</span></li>
</ul>
<p>and so on. However, I have validated my own algorithm in as many ways as I can imagine (compared it with Julia's DSP package algorithm and found an almost perfect match, Parseval's theorem holds, etc.). I have completely run out of ideas with regard to how to improve my own algorithm's match with the other lab's results, especially since <em>on average</em> the final results are essentially the same. I have two questions:</p>
<p><span class="math-container">$(1)$</span> What could the other lab be doing that would produce the difference between their result and mine? I know for a fact: <span class="math-container">$(a)$</span> they use a Hanning window, as I do; <span class="math-container">$(b)$</span> they use Barlett's method, as I do; <span class="math-container">$(c)$</span> we are filtering the signal in the same way before performing the PSA. So what else could cause this mismatch?</p>
<p><span class="math-container">$(2)$</span> Is it even reasonable to expect such a perfect degree of similarity between my own method and that of a different lab, especially in a context where the <em>code</em> the other lab uses is proprietary and hence unknown to any of us? In other words, given the good agreement in the final average, is it reasonable for my superiors to expect that I fine-tune my algorithm to the point that not only the final averages agree almost perfectly, but each epoch's spectrum agrees almost perfectly?</p>
<p>Thanks in advance.</p>
|
<blockquote>
<p>the code which the other lab uses is proprietary and hence unknown to any of us</p>
</blockquote>
<p>Hard to say. Spectral analysis has a lot of parameters you can tweak: window shape, overlap, hop size, smoothing, framing, FFT size, padding, etc. Could also be a non-standard way of doing things or just a plain old bug.</p>
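<p>For instance, here is how just the segment length and overlap - same data, same window family - shift a per-epoch band-power number. The synthetic 30 s noise epoch and the parameter pairs are my own illustration:</p>

```python
import numpy as np
from scipy.signal import welch

rng = np.random.default_rng(2)
fs = 128.0
x = rng.standard_normal(int(30 * fs))        # one synthetic 30 s "epoch"

def band_power(nperseg, noverlap):
    f, P = welch(x, fs=fs, window="hann", nperseg=nperseg, noverlap=noverlap)
    m = (f >= 0.4) & (f <= 4.0)
    return P[m].sum() * (f[1] - f[0])        # power in the 0.4-4 Hz band

b1 = band_power(int(5 * fs), 0)              # 5 s segments, no overlap
b2 = band_power(int(4 * fs), int(2 * fs))    # 4 s segments, 50% overlap
print(b1, b2)                                # same epoch, different numbers
```

Per-epoch numbers diverge like this even when the grand average agrees well, which is consistent with what the OP observes.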
<blockquote>
<p>Is it even reasonable to expect such perfect degree of similarity between my own method and that from a different lab,</p>
</blockquote>
<p>Generally yes. Reproducibility is a core element of any scientific work.</p>
<blockquote>
<p>the code which the other lab uses is proprietary and hence unknown to any of us</p>
</blockquote>
<p>That means you are being asked to do "reverse engineering" which can be legally or ethically problematic. It may be a good idea to have a conversation with your management on why exactly they are trying to deep dive into something proprietary and what they are planning to do with it.</p>
<p>Proprietary can also mean that they made modifications to the standard methods that they feel are advantageous. Could also mean that what they did is messy and not justifiable so they don't want to talk about it.</p>
<blockquote>
<p>is it reasonable for my superiors to expect that I fine-tune my algorithm to the point that not only the final average agrees almost perfectly, but each epoch's spectrum agree almost perfectly</p>
</blockquote>
<p>There are reverse engineering methods that can help here, but it would be good to understand the purpose of the work and the specific requirements. For starters: if you get a different result, what criterion is used to determine which result is "better" or "correct"? Are you sure that the analysis from the other lab is actually "better" than yours? And if so, why?</p>
| 516
|
spectral analysis
|
Comparison of WVD vs STFT Spectral analysis in the presence of Noise
|
https://dsp.stackexchange.com/questions/86297/comparison-of-wvd-vs-stft-spectral-analysis-in-the-presence-of-noise
|
<p>This question is an extension to the question about WVD vs STFT originally posted <a href="https://dsp.stackexchange.com/questions/86211/wigner-ville-distribution-wvd-vs-stft-for-spectral-analysis/86287?noredirect=1#comment182690_86287">Here</a>. During the QA it was pointed out that the WVD only works for noiseless signals.</p>
<p>To test that out I created a simple chirp signal in MATLAB and compared WVD spectrograms at different SNRs.</p>
<p>Below is the time-domain signal on the left and the WVD corresponding WVD on the right for 25dB SNR:</p>
<p><a href="https://i.sstatic.net/6l3XY.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/6l3XY.png" alt="Time-Domain Chirp Signal and WVD at 25dB SNR" /></a></p>
<p>Below is for the 0dB SNR case:</p>
<p><a href="https://i.sstatic.net/N8vTD.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/N8vTD.png" alt="Time-Domain Chirp Signal and WVD at 0dB SNR" /></a></p>
<p>Even at 0dB SNR the presence of chirp is still visible in the WVD spectrogram, is MATLAB doing any other post-processing as well? and is the WVD really useless for time-frequency analysis of real-world signals?</p>
| 517
|
|
spectral analysis
|
Spectral Analysis of a Time Series with Missing Data Points
|
https://dsp.stackexchange.com/questions/22930/spectral-analysis-of-a-time-series-with-missing-data-points
|
<p>I use a PC to record time series of some physical property. The problem is that, for some reason, I did not record the time series as a whole, rather I record first segment, then second, third, etc. Each segment is 500,000 points, representing ~30 min. However, they are not strictly consecutive: between each segment and the other there will be a 'lag', or missing points, equal to a few seconds.</p>
<p>Since the number of missed points is too small compared to the total number of samples for each segment, can I simply concatenate the segments and treat them as a single time series? At the end I will be interested in doing FFTs, histograms, etc.</p>
|
<p>Given <span class="math-container">$ \left\{ x \left[ n \right] \right\}_{n \in M} $</span> where <span class="math-container">$ M $</span> is the set of indices given for the samples of <span class="math-container">$ x \left[ n \right] $</span>.</p>
<p>The trivial solution (it would be great to have a faster, more efficient one) would be:</p>
<p><span class="math-container">$$ \arg \min_{y} \frac{1}{2} \left\| \hat{F}^{T} y - x \right\|_{2}^{2} $$</span></p>
<p>Where <span class="math-container">$ \hat{F} $</span> is formed by subset of columns of the DFT Matrix <span class="math-container">$ F $</span> matching the given indices of the samples, <span class="math-container">$ x $</span> is the vector of the given samples and <span class="math-container">$ y $</span> is the vector of the estimated DFT of the full data of <span class="math-container">$ x \left[ n \right] $</span>.</p>
<p>The solution is then given by the Pseudo Inverse (Least Squares Solution):</p>
<p><span class="math-container">$$ y = { ( \hat{F} \hat{F}^{T} ) }^{-1} \hat{F} x $$</span></p>
<p>In practice, the matrix will be very poorly conditioned, hence the least-squares solution must be computed via the SVD.</p>
<p>The code is available at my <a href="https://github.com/RoyiAvital/StackExchangeCodes" rel="nofollow noreferrer">StackExchange Codes Signal Processing Q17734 GitHub Repository</a> (Look at the <code>SignalProcessing\17734</code> folder).</p>
<p>Result of the code:</p>
<p><a href="https://i.sstatic.net/TF8Tp.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/TF8Tp.png" alt="enter image description here" /></a></p>
<p><strong>Remark</strong>: Borrowed from <a href="https://dsp.stackexchange.com/questions/17734">Estimate the Discrete Fourier Transform / Series of a Signal with Missing Samples</a>.</p>
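<p>A compact NumPy rendering of the above (the two-tone test signal and which samples are dropped are my own illustration; <code>lstsq</code> computes the minimum-norm least-squares solution via the SVD, as the answer recommends for the poor conditioning):</p>

```python
import numpy as np

rng = np.random.default_rng(3)
N = 32
n = np.arange(N)
x_full = np.cos(2 * np.pi * 3 * n / N) + 0.5 * np.sin(2 * np.pi * 7 * n / N)

keep = np.sort(rng.choice(N, size=28, replace=False))   # available sample indices
x = x_full[keep]

# F_hat^T: rows of the inverse-DFT matrix at the available sample times
k = np.arange(N)
A = np.exp(2j * np.pi * np.outer(keep, k) / N) / N

# SVD-based least-squares estimate of the full-length DFT
y, *_ = np.linalg.lstsq(A, x.astype(complex), rcond=None)
print(np.allclose(A @ y, x))                            # fits the given samples
```

With fewer samples than DFT coefficients the system is underdetermined, so the fit at the given samples is exact while the remaining degrees of freedom are resolved by the minimum-norm property.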
| 518
|
spectral analysis
|
Amplitude ratio is larger than 0.5 on a RC-filter if RC = 1.7684e-03
|
https://dsp.stackexchange.com/questions/84469/amplitude-ratio-is-larger-than-0-5-on-a-rc-filter-if-rc-1-7684e-03
|
<p>I have two signals</p>
<pre><code>>> t = linspace(0, 10, 10000); % 0 to 10 seconds
>> u1 = 30*sin(2*pi*5*t); % Sine signal 1
>> u2 = 10*sin(2*pi*90*t); % Sine signal 2
>> u = u1 + u2; % Combine
>> spa(u, t); % Spectral analysis
>> grid on
</code></pre>
<p>And the spectral analysis shows these two signals <code>u1</code> and <code>u2</code></p>
<p><a href="https://i.sstatic.net/NUaLL.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/NUaLL.png" alt="enter image description here" /></a></p>
<p>Now I want to filter the signal <code>u</code> by using a <code>RC-filter</code></p>
<p><span class="math-container">$$G(s) = \frac{1}{RCs + 1}$$</span></p>
<p>I say that at <code>90 Hz</code>, I'm going to reduce the amplitude of the <code>u2</code> sine signal by 50%, i.e. half. 90 Hz is <span class="math-container">$s = 2\pi \cdot 90$</span> rad/s.</p>
<p>So I put in the <span class="math-container">$s$</span> inside <span class="math-container">$G(s)$</span></p>
<p><span class="math-container">$$G(2\pi90) = 0.5 = \frac{1}{RC2\pi90 + 1}$$</span></p>
<p>And now trying to solve <span class="math-container">$RC$</span></p>
<p><span class="math-container">$$RC = \frac{1-0.5}{0.5*2\pi90} = 1.7684*10^{-3}$$</span></p>
<p>Assume that we are using <span class="math-container">$R = 1 kOhm$</span> and <span class="math-container">$C = 1.7684 uF$</span>.</p>
<p>So I created my transfer function i MALTAB / Octave and plotted the bode diagram</p>
<pre><code>>> R = 1000;
>> C = 1.7684e-06;
>> G = tf(1, [R*C 1]) % Transfer function
G =
scalar structure containing the fields:
num = 1
den =
1.7684e-03 1.0000e+00
delay = 0
tfnum = 1
tfdash = ---------------
tfden = 0.0017684s + 1
type = TF
sampleTime = 0
>> bode(G, 0, 600); % From 0 rad/s to a little bit over 2*pi*90 rad/s. As you can see, at ` 2*pi*90 = 565.49 rad/s`, I have `-3 dB`, that's `0.7079` in magnitude.
</code></pre>
<p><a href="https://i.sstatic.net/I4Ksm.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/I4Ksm.png" alt="enter image description here" /></a></p>
<p>I do a linear simulation of my transfer function and the spectral analysis. As you can see, why hasn't the amplitude of the 90 Hz peak been halved?</p>
<pre><code>>> y = lsim(G, u, t);
>> spa(y, t); % Spectral analysis
</code></pre>
<p><a href="https://i.sstatic.net/LR2bR.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/LR2bR.png" alt="enter image description here" /></a></p>
| 519
|
|
spectral analysis
|
Fourier Like Spectral Analysis with Uneven Intervals and Redesigned DFT Matrix
|
https://dsp.stackexchange.com/questions/54569/fourier-like-spectral-analysis-with-uneven-intervals-and-redesigned-dft-matrix
|
<p>I intended to use a discrete Fourier transform (DFT) on a time series sampled at uneven intervals. What I did was to calculate a DFT matrix where the elements are the values at the uneven locations like this</p>
<p><span class="math-container">$$
{\displaystyle {\begin{aligned}X_{k}&= \frac{1}{\sqrt{N}}\sum _{n=0}^{N-1}x_{n}\cdot e^{-{\frac {2\pi i}{time_{N-1} - time_0}}k \cdot time_n}\\&\end{aligned}}}
$$</span></p>
<p>where <span class="math-container">$time_n$</span> is the time location in the sequence and <span class="math-container">$time_{N-1} - time_0$</span> is the entire time span of all samples.</p>
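<p>The formula above is straightforward to implement; here is a minimal NumPy sketch (the variable names and test tone are illustrative, not from the original post):</p>

```python
import numpy as np

rng = np.random.default_rng(0)
N = 64
t = np.sort(rng.uniform(0.0, 1.0, N))     # unevenly spaced sample times
T = t[-1] - t[0]                          # entire time span of all samples
x = np.cos(2 * np.pi * 5 * t / T)         # tone at 5 cycles per span

# X_k = (1/sqrt(N)) * sum_n x_n * exp(-2j*pi*k*t_n / T)
k = np.arange(N)
X = np.exp(-2j * np.pi * np.outer(k, t) / T) @ x / np.sqrt(N)

peak = int(np.argmax(np.abs(X[1:N // 2]))) + 1    # skip the k = 0 bin
print(peak)   # the tone shows up near bin k = 5
```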
<p>I realize that orthogonality of the basis vectors in the DFT matrix might get somewhat lost. Is anything more lost? Can I do something more to make the transformation more comparable to an ordinary DFT?</p>
<p>Here is an implementation of it you can try with your own data: <a href="https://github.com/bodavid" rel="noreferrer">David Jonsson</a>
(<code>bodavid</code>) - <a href="http://bodavid.github.io/Be10/Be10/public_html/index.html" rel="noreferrer">Fourier Transform for Unevenly Sampled Data</a>.<br />
Orthogonality between basis vectors is visualized in a colorized matrix.</p>
|
<p>I have derived the DFT for data which is sampled in a non-uniform manner:</p>
<h1>The DFT Matrix for Non-Uniformly Sampled Time Series</h1>
<h2>Problem Statement</h2>
<p>We have a signal <span class="math-container">$ x \left( t \right) $</span> defined on the interval <span class="math-container">$ \left[ {T}_{1}, {T}_{2} \right] $</span>.<br />
Assume we have <span class="math-container">$ N $</span> samples of it given by <span class="math-container">$ \left\{ x \left( {t}_{i} \right) \right\}_{i = 0}^{N - 1} $</span>. The sample times <span class="math-container">$ {t}_{i} $</span> are arbitrary and not necessarily uniform.</p>
<p>We're after the DFT of the samples <span class="math-container">$ \left\{ X \left[ k \right] \right\}_{k = 0}^{K - 1} $</span> as if they were sampled in a uniform manner (which implicitly means the samples in the frequency domain will be uniform as well).</p>
<h2>Deriving the Connection</h2>
<p>In the <a href="https://en.wikipedia.org/wiki/Discrete_Fourier_transform" rel="nofollow noreferrer">DFT Transform</a> the connection between time and frequency is given by:</p>
<p><span class="math-container">$$ x \left[ n \right] = \frac{1}{N} \sum_{k = 0}^{N - 1} X \left[ k \right] {e}^{j 2 \pi \frac{k}{N} n } \tag{1} $$</span></p>
<p>In <span class="math-container">$ (1) $</span> we use <span class="math-container">$ n $</span> to model the sample index in time. We usually build the samples in time as <span class="math-container">$ x \left[ n \right] = x \left( n {T}_{s} \right) $</span> where <span class="math-container">$ {T}_{s} $</span> is a uniform sampling interval.<br />
Hence we could write:</p>
<p><span class="math-container">$$ x \left( n {T}_{s} \right) = \frac{1}{N} \sum_{k = 0}^{N - 1} X \left[ k \right] {e}^{j 2 \pi \frac{k}{N {T}_{s}} n {T}_{s}} \tag{2} $$</span></p>
<p>In <span class="math-container">$ (2) $</span> we added explicit scaling of time. This is a known property of the Fourier transform family, which scales the domain in order to normalize the transform.</p>
<p>Now, there is nothing which blocks us from using arbitrary time:</p>
<p><span class="math-container">$$\begin{align*} \tag{3}
x \left( t \right) & = \frac{1}{N} \sum_{k = 0}^{N - 1} X \left[ k \right] {e}^{j 2 \pi \frac{k}{N {T}_{s}} t} && \text{} \\
& = \frac{1}{N} \sum_{k = 0}^{N - 1} X \left[ k \right] {e}^{j 2 \pi \frac{k {F}_{s}}{N} t} && \text{Since $ {F}_{s} = \frac{1}{{T}_{s}} $}
\end{align*}$$</span></p>
<p>As can be seen, <span class="math-container">$ (3) $</span> makes sense as it goes through each element according to its frequency and sums to give the output at time <span class="math-container">$ t $</span>. We can go a step further and generalize it to cases where we don't have a uniform sampling frequency.<br />
The average sampling frequency is given by <span class="math-container">$ \bar{F}_{s} = \frac{N}{ {T}_{2} - {T}_{1} } $</span>. Let's define <span class="math-container">$ T = {T}_{2} - {T}_{1} $</span> and we'll get:</p>
<p><span class="math-container">$$ x \left( t \right) = \frac{1}{N} \sum_{k = 0}^{N - 1} X \left[ k \right] {e}^{ j 2 \pi k \frac{t}{T} } $$</span></p>
<p>This in many ways resembles the <a href="https://en.wikipedia.org/wiki/Discrete-time_Fourier_transform" rel="nofollow noreferrer">DTFT Transform</a> equation, which does the same in the other direction, transforming uniform discrete samples in the time domain to arbitrary frequencies (within a frequency interval) in the frequency domain:</p>
<p><span class="math-container">$$\begin{align*} \tag{4}
X \left( f \right) & = \sum_{n = 0}^{N - 1} x \left[ n \right] {e}^{-j 2 \pi f {T}_{s} n } && \text{} \\
& = \sum_{n = 0}^{N - 1} x \left[ n \right] {e}^{-j 2 \pi \frac{f}{ {F}_{s} } n } && \text{Since $ {F}_{s} = \frac{1}{{T}_{s}} $}
\end{align*}$$</span></p>
<p>We see the same scaling, <span class="math-container">$ \frac{f}{ {F}_{s} } $</span> which scales the continuous <span class="math-container">$ f $</span> relative to the interval of frequencies <span class="math-container">$ {F}_{s} $</span> which is equivalent to <span class="math-container">$ \frac{t}{ T } $</span> which scales <span class="math-container">$ t $</span> relative to the time interval of the continuous signal.</p>
<h2>The Transform Matrix</h2>
<p>So, given the set of time indices <span class="math-container">$ {\left\{ {t}_{i} \right\}}_{i = 0}^{N - 1} $</span> the transformation matrix, from frequency domain to time domain, is given by:</p>
<p><span class="math-container">$$ D \in \mathbb{R}^{N \times K}, \; {D}_{i, k} = {e}^{ j 2 \pi k \frac{ {t}_{i} }{T} } $$</span></p>
<h2>The Model</h2>
<p>In vector form the model is:</p>
<p><span class="math-container">$$ x = D y $$</span></p>
<p>Where <span class="math-container">$ y \in \mathbb{C}^{K} $</span> is the vector of the frequency coefficients in uniform grid, <span class="math-container">$ x $</span> is the samples in time (Non Uniform, Or at least no assumption of uniformity) and <span class="math-container">$ D $</span> as defined above.<br />
Since in our model we're after <span class="math-container">$ y $</span> the answer is given by:</p>
<p><span class="math-container">$$ y = {D}^{\dagger} x $$</span></p>
<p>Where <span class="math-container">$ {D}^{\dagger} $</span> is the <a href="https://en.wikipedia.org/wiki/Moore%E2%80%93Penrose_inverse" rel="nofollow noreferrer">Pseudo Inverse Matrix</a> of <span class="math-container">$ D $</span>.</p>
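<p>Put together, the model and its pseudo-inverse solution fit in a few lines of NumPy (a sketch with hypothetical toy data; the full MATLAB implementation is given further down):</p>

```python
import numpy as np

rng = np.random.default_rng(1)
N, K = 64, 33                                # more samples than coefficients
t = np.sort(rng.uniform(0.0, 3.0, N))        # non-uniform sample times
T = 3.0                                      # time interval
k = np.arange(-(K // 2), K // 2 + 1)         # symmetric indices -16..16

# D[i, k] = exp(j * 2*pi * k * t_i / T)
D = np.exp(2j * np.pi * np.outer(t, k) / T)

y_true = np.zeros(K, dtype=complex)
y_true[k == 3] = 0.5                         # conjugate-symmetric pair ...
y_true[k == -3] = 0.5                        # ... so the time signal is real
x = (D @ y_true).real

y_hat = np.linalg.pinv(D) @ x                # least-squares recovery of y
print(np.allclose(y_hat, y_true, atol=1e-6))
```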
<h2>Implementation & Results</h2>
<p>The code is as following:</p>
<pre><code>subStreamNumberDefault = 79;
run('InitScript.m');
figureIdx = 0;
figureCounterSpec = '%04d';
generateFigures = ON;
%% Simulation Parameters
samplingFrequency = 101; %<! [Hz]
samplingInterval = 1 / samplingFrequency; %<! [Sec]
startTime = 1; %<! [Sec]
endTime = 4; %<! [Sec]
timeInterval = endTime - startTime; %<! [Sec]
numSamples = round(samplingFrequency * timeInterval);
numSamplesTT = round(1.2 * numSamples);
signalFreq = 2; %!< [Hz]
% The uniform time grid
vT = linspace(startTime, endTime, numSamples + 1);
vT(end) = [];
vT = vT(:);
% The non uniform time grid - Reconstruction
vTT = endTime * rand(numSamplesTT, 1);
vTT = sort(vTT, 'ascend');
% The non uniform time grid - DFT
vTD = linspace(startTime, endTime, (10 * numSamples) + 1);
vTD(end) = [];
vTD = vTD(sort(randperm(length(vTD), numSamples)));
vTD = vTD(:);
% The uniform frequency grid
vF = (samplingFrequency / 2) * linspace(-1, 1, numSamples + 1);
vF(end) = [];
vF = vF(:);
vK = [-floor(numSamples / 2):floor((numSamples - 1) / 2)];
vK = vK(:);
%% Generate Data
vX = cos(2 * pi * signalFreq * vT);
vFx = fftshift(fft(vX));
figureIdx = figureIdx + 1;
hFigure = figure('Position', figPosLarge);
hAxes = subplot(1, 2, 1);
hLineSeries = plot(vT, vX);
set(hLineSeries, 'LineWidth', lineWidthNormal);
set(get(hAxes, 'Title'), 'String', {['Reference Signal']}, ...
'FontSize', fontSizeTitle);
set(get(hAxes, 'XLabel'), 'String', {['Time Index']}, ...
'FontSize', fontSizeTitle);
set(get(hAxes, 'YLabel'), 'String', {['Sample Value']}, ...
'FontSize', fontSizeTitle);
hAxes = subplot(1, 2, 2);
hStemObj = stem(vF, abs(vFx));
set(hStemObj, 'LineWidth', lineWidthNormal);
set(get(hAxes, 'Title'), 'String', {['DFT of the Reference Signal']}, ...
'FontSize', fontSizeTitle);
set(get(hAxes, 'XLabel'), 'String', {['Frequency [Hz]']}, ...
'FontSize', fontSizeTitle);
set(get(hAxes, 'YLabel'), 'String', {['Magnitude']}, ...
'FontSize', fontSizeTitle);
if(generateFigures == ON)
saveas(hFigure,['Figure', num2str(figureIdx, figureCounterSpec), '.png']);
end
%% Analysis - Reconstruction
mD = exp(1j * 2 * pi * (vTT / timeInterval) * vK.') / numSamples;
% Reconstruction according to the model
vY = real(mD * vFx);
figureIdx = figureIdx + 1;
hFigure = figure('Position', figPosLarge);
hAxes = axes();
set(hAxes, 'NextPlot', 'add');
hLineSeries = plot(vT, vX);
set(hLineSeries, 'LineWidth', lineWidthNormal);
hLineSeries = plot(vTT, vY);
set(hLineSeries, 'LineWidth', lineWidthNormal, 'LineStyle', ':', 'Marker', '*');
set(get(hAxes, 'Title'), 'String', {['Uniform Signal & Non Uniform Signal']}, ...
'FontSize', fontSizeTitle);
set(get(hAxes, 'XLabel'), 'String', {['Time Index']}, ...
'FontSize', fontSizeTitle);
set(get(hAxes, 'YLabel'), 'String', {['Sample Value']}, ...
'FontSize', fontSizeTitle);
hLegend = ClickableLegend({['Uniform Signal'], ['Non Uniform Signal']});
if(generateFigures == ON)
saveas(hFigure,['Figure', num2str(figureIdx, figureCounterSpec), '.png']);
end
%% Analysis - DFT of the Non Uniformly Sampled Data
vY = cos(2 * pi * signalFreq * vTD);
mD = exp(1j * 2 * pi * (vTD / timeInterval) * vK.') / numSamples;
vFy = pinv(mD) * vY;
figureIdx = figureIdx + 1;
hFigure = figure('Position', figPosLarge);
hAxes = axes();
set(hAxes, 'NextPlot', 'add');
hLineSeries = plot(vT, vX);
set(hLineSeries, 'LineWidth', lineWidthNormal);
hLineSeries = plot(vTD, vY);
set(hLineSeries, 'LineWidth', lineWidthNormal, 'LineStyle', ':', 'Marker', '*');
set(get(hAxes, 'Title'), 'String', {['Uniform Signal & Non Uniform Signal']}, ...
'FontSize', fontSizeTitle);
set(get(hAxes, 'XLabel'), 'String', {['Time Index']}, ...
'FontSize', fontSizeTitle);
set(get(hAxes, 'YLabel'), 'String', {['Sample Value']}, ...
'FontSize', fontSizeTitle);
hLegend = ClickableLegend({['Uniform Signal'], ['Non Uniform Signal']});
if(generateFigures == ON)
saveas(hFigure,['Figure', num2str(figureIdx, figureCounterSpec), '.png']);
end
figureIdx = figureIdx + 1;
hFigure = figure('Position', figPosLarge);
hAxes = axes();
set(hAxes, 'NextPlot', 'add');
hStemObj = stem(vF, abs([vFx, vFy]));
set(hStemObj, 'LineWidth', lineWidthNormal);
% hLineSeries = plot(vTT, vY);
% set(hLineSeries, 'LineWidth', lineWidthNormal, 'LineStyle', ':', 'Marker', '*');
set(get(hAxes, 'Title'), 'String', {['DFT of the Uniform Signal & Non Uniform Signal']}, ...
'FontSize', fontSizeTitle);
set(get(hAxes, 'XLabel'), 'String', {['Frequency [Hz]']}, ...
'FontSize', fontSizeTitle);
set(get(hAxes, 'YLabel'), 'String', {['Magnitude']}, ...
'FontSize', fontSizeTitle);
hLegend = ClickableLegend({['Uniform Signal'], ['Non Uniform Signal']});
if(generateFigures == ON)
saveas(hFigure,['Figure', num2str(figureIdx, figureCounterSpec), '.png']);
end
</code></pre>
<p>Results are:</p>
<p><img src="https://i.sstatic.net/ttz0k.png" alt="" />
<img src="https://i.sstatic.net/fiPAH.png" alt="" />
<img src="https://i.sstatic.net/4R1Is.png" alt="" />
<img src="https://i.sstatic.net/FVTTk.png" alt="" /></p>
<h2>Summary</h2>
<p>In this post we derived how to estimate the uniform DFT of a non-uniform time series by solving a linear system of equations.</p>
<p>The full code is available on my <a href="https://github.com/RoyiAvital/StackExchangeCodes" rel="nofollow noreferrer">StackExchange Signal Processing Q32137 GitHub Repository</a> (Look at the <code>SignalProcessing\Q32137</code> folder).</p>
<h2>Remark: Why Do We Need to Apply <code>fftshift()</code> on the DFT of the Signal?</h2>
<p>Indeed in the Reconstruction part we use <code>fftshift()</code>. The shallow answer is easy: we also build the vector <code>vK</code> symmetric around zero.<br />
But there is a deeper reason for that. In the DFT when we use uniform sampling in Frequency Domain and Time Domain <em>Magic</em> happens without us seeing it explicitly.</p>
<p>When we defined the term <span class="math-container">$ \frac{k}{ N {T}_{s} } n {T}_{s} $</span> we replaced <span class="math-container">$ n {T}_{s} $</span> with <span class="math-container">$ t $</span>, hence we prevent the term <span class="math-container">$ {T}_{s} $</span> from cancelling itself. Now setting <span class="math-container">$ {F}_{s} = \frac{1}{{T}_{s}} $</span> means that we multiply by <span class="math-container">$ k $</span> and get frequencies which lie beyond the Nyquist frequency.<br />
In most cases, when that happens, the modulo property of the exponent comes in and we get the correct negative value of the frequency in the range <span class="math-container">$ \left[ -\pi, \pi \right] $</span>. Yet when <span class="math-container">$ t $</span> is arbitrary we can think of <span class="math-container">$ {F}_{s} $</span> as changing per sample, which means that when we go farther than <span class="math-container">$ \pi $</span> the modulo doesn't bring us to the correct answer.</p>
<p>First, as intuition, always think of the DFT as defined on the <span class="math-container">$ \left[ -\pi, \pi \right] $</span> interval and continuous. So as long as you work in this range, things work as intended. This intuition can come from the Fourier Series and the Discrete Fourier Series (DFS).</p>
<p>Let's try explaining it using a concrete example. Let's examine the exponent term from the derivation:</p>
<p><span class="math-container">$$ 2 \pi \frac{k}{N {T}_{s}} n {T}_{S} = 2 \pi \frac{k}{{F}_{S}} \frac{{F}_{s}}{N} n = 2 \pi \frac{k b}{{F}_{s}} n $$</span></p>
<p>Where <span class="math-container">$ b $</span> is the Bin Resolution in the Frequency domain. Now given the signal is:</p>
<p><span class="math-container">$$ x \left( t \right) = \cos \left( 2 \pi f t \right) \Rightarrow x \left( n {T}_{s} \right) = \cos \left( 2 \pi f {T}_{s} n \right) \Rightarrow x \left[ n \right] = \cos \left( 2 \pi \frac{f}{ {F}_{s} } n \right) $$</span></p>
<p>For <span class="math-container">$ {F}_{s} = 100 $</span> [Hz] and <span class="math-container">$ N = 100 $</span> (Which means <span class="math-container">$ b = 1 $</span>) we will have delta at <span class="math-container">$ k = 2 $</span> and <span class="math-container">$ k = 98 $</span>. For <span class="math-container">$ k = 98 $</span>:</p>
<p><span class="math-container">$$ 2 \pi \frac{98}{{F}_{s}} n $$</span></p>
<p>This is clearly above the Nyquist frequency (<span class="math-container">$ \frac{{F}_{s}}{2} $</span>), and only for <span class="math-container">$ {F}_{s} = 100 $</span> does its modulo come out as <span class="math-container">$ -2 $</span>, which is correct. But in the model above, since <span class="math-container">$ t $</span> is arbitrary, one can think of <span class="math-container">$ {F}_{s} $</span> as changing per sample, which means we don't get the correct value.</p>
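<p>A short check (mine, not from the original derivation) makes the distinction concrete: on the integer sample grid the modulo identity is exact, but at an arbitrary non-integer time it fails.</p>

```python
import numpy as np

Fs, N = 100, 100
n = np.arange(N)

# On the integer grid n, bin k = 98 is indistinguishable from k = -2
hi = np.exp(1j * 2 * np.pi * 98 / Fs * n)
lo = np.exp(-1j * 2 * np.pi * 2 / Fs * n)
print(np.allclose(hi, lo))                      # True

# But at an arbitrary (non-integer) time t the two exponents differ
t = 0.37
same = np.isclose(np.exp(1j * 2 * np.pi * 98 / Fs * t),
                  np.exp(-1j * 2 * np.pi * 2 / Fs * t))
print(bool(same))                               # False
```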
<p>This means the actual equation should be:</p>
<p><span class="math-container">$$ x \left( t \right) = \frac{1}{N} \sum_{k = \left \lfloor - \frac{N}{2} \right \rfloor }^{ \left \lfloor \frac{N - 1}{2} \right \rfloor } X \left[ k \right] {e}^{ j 2 \pi k \frac{t}{T} } $$</span></p>
| 520
|
spectral analysis
|
STFT frequency domain processing destroys window shape, makes synthesis glitchy
|
https://dsp.stackexchange.com/questions/88402/stft-frequency-domain-processing-destroys-window-shape-makes-synthesis-glitchy
|
<p>I have a signal processing chain which goes:</p>
<ul>
<li>
<ol>
<li>Every 256 samples, take 512 samples of input buffer</li>
</ol>
</li>
<li>
<ol start="2">
<li>Apply a Hann window to the samples</li>
</ol>
</li>
<li>
<ol start="3">
<li>Copy the windowed samples into the real components of a complex array</li>
</ol>
</li>
<li>
<ol start="4">
<li>FFT the complex array, obtaining complex spectral analysis data</li>
</ol>
</li>
<li>
<ol start="5">
<li>Modify the spectral analysis data</li>
</ol>
</li>
<li>
<ol start="6">
<li>IFFT the modified spectral analysis into a complex array</li>
</ol>
</li>
<li>
<ol start="7">
<li>Overlap-write the real components of the complex array into an output buffer at 256-sample intervals.</li>
</ol>
</li>
</ul>
<p>If stage 5 does nothing, the input signal is perfectly reconstructed. The data being overlap-written into the output buffer at (7) has the Hann window shape applied at (2), so the overlaps merge smoothly.</p>
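<p>That perfect reconstruction relies on the analysis windows summing to a constant at this hop size (the COLA condition); for a periodic Hann window at 50% overlap this holds exactly, as a short check (not part of the original post) confirms:</p>

```python
import numpy as np

N, hop = 512, 256
n = np.arange(N)
w = 0.5 * (1.0 - np.cos(2.0 * np.pi * n / N))   # periodic Hann window

# Overlap-add the window with itself at hop = N/2: w[n] + w[n + N/2] == 1
total = w[:hop] + w[hop:]
print(np.allclose(total, 1.0))   # windows at 50% overlap tile to a constant
```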
<p>If in (5) I try to apply a spectral delay, by adding in frequency components captured in previous windows, the Hann window shape is lost - the output data no longer tapers to 0 at both ends - and the overlaps do not merge smoothly. Horrible crackles ensue.</p>
<p>I can reapply a Hann window to the output data before overlap-writing, but then I get a sort of "warbling" effect due to the fact that I've now effectively windowed the data twice.</p>
<p>Ideally the stage 5 processing would leave the window shape intact - but it's unclear how to do this when changing the spectral analysis data. (Note that I'm already ensuring that conjugate symmetry is preserved). Is there a missing step? Do I need to apply some phase adjustment to the delayed frequency components?</p>
|
<p>Frequency domain processing is complicated primarily because of "time domain aliasing".</p>
<p>Most frequency domain manipulations can easily be expressed as a linear filtering operation, i.e.</p>
<p><span class="math-container">$$H[k] = \frac{Y[k]}{X[k]}$$</span></p>
<p>where <span class="math-container">$X[k]$</span> is the input spectrum, <span class="math-container">$Y[k]$</span> the resulting spectrum and <span class="math-container">$H[k]$</span> the transfer function of the "effective" filter. The impulse response of that filter is given by the inverse DFT of the transfer function, i.e.</p>
<p><span class="math-container">$$h[n] = \text{DFT}^{-1}\left\{ H[k] \right\}$$</span></p>
<p>Multiplication in the frequency domain is equivalent to <strong>circular</strong> (not linear) convolution in the time domain. If <span class="math-container">$h[n]$</span> has non-trivial time extension, then the linear convolution of signal and impulse response will be longer than the DFT size and the overhang will wrap around to the beginning of the buffer. That creates the crackles, and that's time domain aliasing.</p>
<p>If you do not change the spectrum, the effective transfer function becomes <span class="math-container">$H[k] = 1$</span> and the impulse response is <span class="math-container">$h[n] = \delta[n]$</span>. That has a length of 1, so no aliasing (and no crackle) occurs.</p>
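<p>The wrap-around can be reproduced in a few lines (an illustration of the identity, not from the original answer): multiplying two N-point spectra yields exactly the linear convolution with its tail folded back onto the start of the buffer.</p>

```python
import numpy as np

N = 16
rng = np.random.default_rng(2)
x = rng.standard_normal(N)
h = rng.standard_normal(N)          # non-trivial impulse response

# Multiplying N-point spectra gives the *circular* convolution ...
circ = np.fft.ifft(np.fft.fft(x) * np.fft.fft(h)).real

# ... which equals the linear convolution with its overhang wrapped around
lin = np.convolve(x, h)             # length 2N - 1
wrapped = lin[:N].copy()
wrapped[: N - 1] += lin[N:]         # fold the tail back onto the start
print(np.allclose(circ, wrapped))   # True
```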
<p>The standard solution for linear time-invariant filters is zero-padding combined with the Overlap-Add or Overlap-Save algorithm.</p>
<p>For time variant filters, things become much more complicated. In essence you need to control your spectral manipulation so that the equivalent impulse response stays causal and temporally compact. As far as I know, there is no "one size fits all" solution for this. There are a variety of methods (increase zero pad, decrease hop size, symmetric square root windows for both transforms, controlling the speed of "time variance" in the spectra domain, etc.) but the best choice depends a lot on the properties of your signals, the nature of your spectral manipulation and the requirements of your application.</p>
| 521
|
spectral analysis
|
Clarification concerning power spectral density
|
https://dsp.stackexchange.com/questions/43155/clarification-concerning-power-spectral-density
|
<p>Many books in signal processing, e.g. Papoulis [1], define power spectral density (PSD) as:</p>
<p>$$S(\omega)=\sum_{k=-\infty}^{\infty}R_{xx}(k)e^{-j\omega k}$$</p>
<p>which is the Fourier transform of the correlation function:
$$R_{xx}(\tau)=E[x(t)x(t-\tau)]$$</p>
<p>However, some authors in the context of time series analysis, for example Jenkins [2] define it as:</p>
<p>$$\Gamma_{xx}(\omega)=\sum_{k=-\infty}^{\infty}\gamma_{xx}(k)e^{-j\omega k}$$</p>
<p>which is the Fourier transform of the covariance function:</p>
<p>$$\gamma_{xx}(\tau)=\hbox{Cov}[x(t)x(t-\tau)]=E[x(t)x(t-\tau)]-E[x(t)]E[x(t-\tau)]$$</p>
<p>As I understand, the Wiener-Khinchin theorem considers $R_{xx}(\tau)$</p>
<p>Can someone clarify why $\Gamma_{xx}(\omega)$ is a valid definition of the PSD?</p>
<p>[1] Papoulis, A. (1965). Probability, random variables, and stochastic processes.</p>
<p>[2] Jenkins,G. Watts,D. (1968), Spectral Analysis and Its Applications</p>
|
<p>I haven't seen Jenkins' context, but for <em>zero mean</em> WSS random processes the <em>autocorrelation function</em> and the <em>autocovariance function</em> will be the same, hence you can use either of the definitions when the mean is zero.</p>
<p>For nonzero mean processes, see the relation between the autocorrelation and autocovariance functions on p. 321 of Papoulis 3rd ed. (eq. 10.125), which is
$$S(\omega) = S^c(\omega) + 2\pi \eta^2\delta(\omega)$$</p>
<p>where $S(\omega)$ is the Fourier transform of the autocorrelation function, $S^c(\omega)$ is the Fourier transform of the autocovariance function, and $\eta$ is the mean of the WSS random process.</p>
<p>Note that the autocovariance definition will always treat the random process by subtracting its mean.</p>
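<p>A quick numerical illustration of this relation (hypothetical data, not from the original answer): for a white process with mean $\eta$, the sample autocorrelation and autocovariance at a fixed lag differ by roughly $\eta^2$, which is exactly the DC term $2\pi\eta^2\delta(\omega)$ in the spectrum.</p>

```python
import numpy as np

rng = np.random.default_rng(3)
eta = 2.0
x = eta + rng.standard_normal(200_000)   # white noise with mean eta

# Sample autocorrelation and autocovariance at lag 5
tau = 5
R = np.mean(x[:-tau] * x[tau:])          # estimate of E[x(t) x(t - tau)]
gamma = R - np.mean(x[:-tau]) * np.mean(x[tau:])

print(R - gamma)                         # ~ eta**2 = 4.0
```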
| 522
|
spectral analysis
|
Discrete Fourier transform of a finite length signal which saturates at non-zero value
|
https://dsp.stackexchange.com/questions/72120/discrete-fourier-transform-of-a-finite-length-signal-which-saturates-at-non-zero
|
<p>I am performing spectral analysis of a finite length signal that saturates to a non-zero value. The signal (<span class="math-container">$s(t)$</span>) can, for practical purposes, be written as
<span class="math-container">$s(t) = f(t) \big(1-H(t-t_0)\big) $</span>, where <span class="math-container">$t_0$</span> is the time at which the signal ends and <span class="math-container">$H(t)$</span> is the Heaviside step function. The sudden step from a non-zero value to zero at <span class="math-container">$t = t_0$</span> induces artifacts in the analysis.</p>
<p>I am looking for advice/literature on how to deal with signals of this type.</p>
<p>Question: How does one deal with artifacts caused by signals terminating at a non-zero value when performing spectral analysis?</p>
|
<p>You can & should window your signal. The choice of window depends on the specific requirements of your application. Wikipedia has a really good overview of the properties and trade-offs: <a href="https://en.wikipedia.org/wiki/Window_function" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/Window_function</a></p>
<p>Keep in mind that you already ARE windowing: your finite-length signal is simply an infinite-length signal multiplied by a rectangular window.</p>
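<p>A small illustration of the point (my own sketch, not from the original answer): an off-bin tone truncated at a non-zero value leaks broadly under the implicit rectangular window, and far less under a Hann window.</p>

```python
import numpy as np

N = 256
n = np.arange(N)
x = np.sin(2 * np.pi * 10.37 * n / N)      # off-bin tone ends at non-zero value

rect = np.abs(np.fft.rfft(x))              # implicit rectangular window
hann = np.abs(np.fft.rfft(x * np.hanning(N)))

def leakage(spec, k0=10, half=3):
    """Fraction of spectral energy outside +/-half bins of the peak."""
    e = spec ** 2
    inband = e[k0 - half : k0 + half + 1].sum()
    return 1.0 - inband / e.sum()

print(leakage(rect) > leakage(hann))   # True: tapering reduces the leakage
```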
| 523
|
spectral analysis
|
Need help understanding FFT for timeseries analysis?
|
https://dsp.stackexchange.com/questions/93411/need-help-understanding-fft-for-timeseries-analysis
|
<p>I am studying the seismo-ionospheric anomalies associated with the Japan earthquake using total electron content derived from a GPS signal. I am trying to detect acoustic-gravity waves in the signal caused by the earthquake if possible.</p>
<p>I am trying to perform spectral analysis for this task using matplotlib's built-in functions. So far, I have written the following code</p>
<pre><code>fig, [ax1, ax2, ax3] = plt.subplots(nrows=3)
fig.tight_layout()
(spec, freq, ln) = ax1.magnitude_spectrum(hourly_usud["TEC"], scale="dB", )
ax1.set_title("Time and Frequency Domain TEC")
ax2.phase_spectrum(hourly_usud["TEC"]);
ax3.plot(hourly_usud["TEC"])
ax3.set_xticklabels(ax3.get_xticklabels(), rotation = 45);
ax3.xaxis_date()
</code></pre>
<p>I've got the following figure, but I don't know what to do next, how to interpret that graph, or how to extract meaningful insights from it.</p>
<p><a href="https://i.sstatic.net/PNOvJ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/PNOvJ.png" alt="enter image description here" /></a></p>
|
<p>I think you are approaching this backwards. In general I would recommend</p>
<ol>
<li>Start with a mathematical model of your system</li>
<li>Analyze that model and identify certain properties or behaviors that might be identifiable in that data</li>
<li>Create some hypothesis around these properties</li>
<li>Pick the analysis method that's most suitable to look for the properties</li>
<li>Grab your data and go to town.</li>
</ol>
<p>Applying a random analysis method and staring at it can occasionally be useful, but if you don't know what to look for, the likelihood of finding anything useful is low.</p>
| 524
|
spectral analysis
|
Cepstrum analysis - Cut off frequencies displayed
|
https://dsp.stackexchange.com/questions/38837/cepstrum-analysis-cut-off-frequencies-displayed
|
<p>My inquiry is regarding the so called <a href="https://en.wikipedia.org/wiki/Cepstrum" rel="nofollow noreferrer">cepstrum</a> analysis.</p>
<p>By conducting some experiments, I have real time data at my disposal. The acquisition rate is 1600 Hz.</p>
<p>I wanted to try cepstrum analysis just out of curiosity to see how it performs in comparison with trivial spectral analysis.</p>
<p>So what I do with the acquired data:</p>
<ol>
<li>Use a lowpass filter with a cut-off frequency of 340 Hz for noise attenuation.</li>
<li>Fourier transform the filtered data, removing the DC component by subtracting the mean value from the filtered data.</li>
<li>Take the log of the magnitude of the FFT.</li>
<li>Apply IFFT on the resulting vector to acquire the real cepstrum.</li>
</ol>
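<p>For reference, those four steps map almost line for line onto NumPy (my own sketch; the test signal is hypothetical):</p>

```python
import numpy as np

fs = 1600                                # acquisition rate, Hz
t = np.arange(0, 1.0, 1.0 / fs)
x = np.sin(2 * np.pi * 50 * t) + 0.3 * np.sin(2 * np.pi * 150 * t)

x = x - np.mean(x)                       # remove DC before transforming
X = np.fft.fft(x)
log_mag = np.log(np.abs(X) + 1e-12)      # small floor avoids log(0)
cepstrum = np.fft.ifft(log_mag).real     # the real cepstrum

print(cepstrum.shape)                    # one quefrency bin per sample
```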
<p>I notice that, despite the filtering that has taken place at step 1, the cepstrum plot displays activity beyond the cut-off frequencies. So my question is this: Is this supposed to happen? To me it seems that this is not normal. Does anyone have any experience with that?</p>
<p>Edit: Plot image added for clarification. Frequencies of interest lie on the right of the red line displayed</p>
<p><a href="https://i.sstatic.net/UlEUd.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/UlEUd.jpg" alt="Cepstrum vs Quefrency plot"></a></p>
|
<p>Presence of frequencies above the cutoff in the cepstrum could be due to the following:</p>
<ul>
<li>Is the lowpass filter used a perfect lowpass filter (does it remove all the frequencies above the cutoff, or just attenuate them)?</li>
<li>If the lowpass filter is not perfect, what are the relative magnitudes of the high frequencies (above the cutoff) with respect to the low frequencies? If the high frequencies are relatively dominant, they will be visible in the cepstrum.</li>
</ul>
<p>Also, regarding "Fourier transform the filtered data, removing the DC component by subtracting the mean value from the filtered data":</p>
<p>subtracting the mean after the Fourier transform is not the way to remove DC; the mean should be subtracted from the time-domain data before transforming.</p>
| 525
|
spectral analysis
|
what is meant by biased and unbiased Autocovariance sequence estimate?
|
https://dsp.stackexchange.com/questions/48407/what-is-meant-by-biased-and-unbiased-autocovariance-sequence-estimate
|
<p>In nonparametric spectral estimation, what is meant by biased and unbiased autocovariance sequence estimates, and why is the biased one most commonly used?
I have the answer to the second question from Petre Stoica (<em>Spectral Analysis of Signals</em>) but I am not getting what he wants to say.</p>
|
<p>In spectral estimation, one estimates "spectra" and other values. And one is interested in knowing whether the estimate $\hat{x}$ converges to the "true" but unknown value $x$. It is unknown, for instance, because one only has access to a finite quantity $N$ of samples to compute with, a limited number of realizations, because ergodicity is not assured, etc. So we want to know whether $\hat{x}_N\to x$ as $N\to \infty$. The convergence can be thought of in different ways, and the most common is the mean square sense: does</p>
<p>$$\lim_{N\to \infty} E(|\hat{x}_N-x|^2)=0$$
where $E$ is the expectation? This can be really complicated to prove in practice. Thus, often people are satisfied when simpler conditions are met. And there are two necessary conditions for the above: </p>
<p>$$\lim_{N\to \infty} E(\hat{x}_N)=x$$</p>
<p>$$\lim_{N\to \infty} E\left(\left|\hat{x}_N-E(\hat{x}_N)\right|^2\right)=0$$</p>
<p>The first is the asymptotic unbiasedness, the second the vanishing variance. Together, they define consistent estimators. So, a biased autocorrelation estimate won't converge to the true autocorrelation. And derived estimators (like periodograms) are unlikely to converge as well.</p>
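<p>For concreteness, the two textbook autocorrelation estimators differ only in their normalization; a short sketch (hypothetical data, not from the original answer):</p>

```python
import numpy as np

rng = np.random.default_rng(4)
N = 1000
x = rng.standard_normal(N)               # zero-mean white noise

def acf(x, k, biased=True):
    """Lag-k sample autocorrelation, biased (1/N) or unbiased (1/(N-k))."""
    s = np.sum(x[:N - k] * x[k:])
    return s / N if biased else s / (N - k)

# At lag 0 both agree; at large lags the biased one shrinks toward 0,
# since it divides the same sum by N instead of the smaller N - k.
k = 900
print(abs(acf(x, k, biased=True)) <= abs(acf(x, k, biased=False)))   # True
```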
| 526
|
spectral analysis
|
Indian Space Research Organization - Extent of spectral spread 1/3rd Octave Analysis
|
https://dsp.stackexchange.com/questions/74666/indian-space-research-organization-extent-of-spectral-spread-1-3rd-octave-anal
|
<p>I am a scientist at Indian Space Research Organization (ISRO) and will be representing my team. We work on Telemetry and Data Handling units for our Launch Vehicles.</p>
<p>We are preparing for our upcoming SSLV Launch (Small Satellite Launch Vehicle) and need some information to confirm if we are at par with the industry, or if we need to switch to better available options.</p>
<p>It would be really helpful if you could share the names of companies that excel in software or equipment for 1/3rd-octave analysis of acoustic signals, ones you know of or have used and can vouch for in terms of accuracy.
For each, please mention:</p>
<ul>
<li><p>Order of band pass filter</p>
</li>
<li><p>Type of filter (Elliptic/butterworth etc.)</p>
</li>
<li><p>Order of Low pass filter (If one is being used in the design), type of low pass filter, its stop band attenuation and passband ripple.</p>
</li>
</ul>
<p>Your help is duly appreciated.
Warm regards from our entire team at ISRO.</p>
<hr />
<p>Rekhanshi Varma
Scientist/Engineer 'SC'
Vikram Sarabhai Space Center
Indian Space Research Organization (ISRO)
Office Ph: 04712564741 | Country Code: +91</p>
|
<p>That's not really a signal processing question, but here are some pointers.</p>
<p>3rd octave filters are well defined in ANSI S1.11-2004 and any decent commercial implementation should be fully compliant with the standard. The implementation is rather simple, so compliance isn't much of a problem. It's easy enough to do it yourself, and there are ready-made packages in analysis products like Matlab <a href="https://www.mathworks.com/help/audio/ug/octave-band-and-fractional-octave-band-filters.html" rel="nofollow noreferrer">https://www.mathworks.com/help/audio/ug/octave-band-and-fractional-octave-band-filters.html</a> or measurement products like B&K <a href="https://www.bksv.com/en/instruments/handheld/sound-level-meter-apps/bz-7132-and-7131-frequency-analysis" rel="nofollow noreferrer">https://www.bksv.com/en/instruments/handheld/sound-level-meter-apps/bz-7132-and-7131-frequency-analysis</a> (no product endorsement intended, but these are reputable names).</p>
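<p>For reference, the base-2 band definitions behind such implementations reduce to a one-line formula, midband frequencies at $1000 \cdot 2^{m/3}$ Hz with band edges at $f_m \cdot 2^{\pm 1/6}$ (a sketch; the exact band-numbering convention in ANSI S1.11 differs in bookkeeping details):</p>

```python
def third_octave_band(m):
    """(lower edge, midband, upper edge) in Hz for band offset m,
    relative to the 1 kHz reference band (base-2 convention)."""
    fm = 1000.0 * 2.0 ** (m / 3.0)
    return fm * 2.0 ** (-1.0 / 6.0), fm, fm * 2.0 ** (1.0 / 6.0)

lo, mid, hi = third_octave_band(0)
print(round(mid), round(hi / lo, 4))   # 1000 and bandwidth ratio 2**(1/3)
```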
<p>One thing to keep in mind: In most real world acoustic problems, the difficult part is NOT the analysis but actually making the measurement and/or data capture. Where and how to place what type of microphones, when and how to capture with what signal, making sure that the signals are neither overdriven nor overly noisy, good signal to noise ratio at all frequencies of interest, good reproducibility, etc. This requires far more experience and expertise than the data analysis, so if you want to spend extra money, I recommend you spend it there.</p>
<p>Sound fields are very complicated 3-dimensional phenomena and a microphone only samples a single point in space. Moving a microphone by a few centimeters often makes much more of a difference than any analysis parameter.</p>
| 527
|
spectral analysis
|
DCT - Measures of energy compaction gain achieved using DCT over FFT
|
https://dsp.stackexchange.com/questions/87162/dct-measures-of-energy-compaction-gain-achieved-using-dct-over-fft
|
<p>The discrete cosine transform (DCT) is a popular choice for spectral analysis in audio, video, and image compression algorithms. This is primarily due to its efficient "spectral compaction" property in comparison with the fast Fourier transform (FFT). Is there a way to measure the approximate "spectral compaction gain" (if it is appropriate to call it that), say "X" times, achieved by using the DCT on data instead of the FFT?</p>
|
<p>In <em>Discrete-Time Signal Processing</em> by <em>Oppenheim</em>, <strong>chapter 8.5</strong>, there is a quantification of mean squared error by setting coefficients to 0 for both DCT and DFT:</p>
<p><a href="https://i.sstatic.net/UumIR.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/UumIR.png" alt="enter image description here" /></a></p>
<p>From this, you can of course calculate the percentages as you see fit. This particular analysis was done for signal:</p>
<p><span class="math-container">$$x[n] = a^n\cos(\omega_0n + \phi)$$</span></p>
<p>with following transforms:</p>
<p><a href="https://i.sstatic.net/q8mxX.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/q8mxX.png" alt="enter image description here" /></a></p>
<hr />
<p>If you wish to play with this example, here's the code. It gives both the MSE and the amount of energy preserved after reconstruction.</p>
<p><a href="https://i.sstatic.net/EW8Ao.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/EW8Ao.png" alt="enter image description here" /></a>
<a href="https://i.sstatic.net/hZVSs.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/hZVSs.png" alt="enter image description here" /></a></p>
<pre><code>%% Signal to analyse
N = 32; % Must be even!
a = 0.9;
w0 = 0.1*pi;
n = 1:N;
x = a.^n.*cos(w0*n);
plot(n, x)
%% Calculate transforms
Xdft = fft(x);
Xdct = dct(x);
%% Plot
figure(1)
subplot(3,1,1)
stem(0:N/2, real(Xdft(1:N/2+1)))
title("Real part of DFT")
subplot(3,1,2)
stem(0:N/2, imag(Xdft(1:N/2+1)))
title("Imaginary part of DFT")
subplot(3,1,3)
stem(0:N-1, Xdct)
title("DCT")
%% Compact and measure
mseDFT = []; % Mean Squared Error
mseDCT = [];
energyDFT = []; % Energy preserved
energyDCT = [];
% In case of the DFT we must zero an odd number (1,3,5,...) of bins,
% symmetric about the Nyquist bin, to preserve conjugate symmetry
for m=1:N/2
xRec = dftReconstructCompact(Xdft, m);
mseDFT(m) = mean(abs(x-xRec).^2);
energyDFT(m) = norm(xRec) / norm(x);
end
for m=1:N
xRec = dctReconstructCompact(Xdct, m);
mseDCT(m) = mean(abs(x-xRec).^2);
energyDCT(m) = norm(xRec) / norm(x);
end
%% Plot the errors
figure(2)
plot(1:N, mseDCT, '-x')
hold on
plot(1:2:N, mseDFT, '-o')
grid on
legend({"DCT", "DFT"})
title("MSE between original and truncated")
figure(3)
plot(1:N, energyDCT, '-x')
hold on
plot(1:2:N, energyDFT, '-o')
grid on
legend({"DCT", "DFT"})
title("Energy Preserved after truncation")
%% Helper functions
function xRec = dftReconstructCompact(X, m)
% Zero-out 2*m-1 points from the DFT (starting from Nyquist bin and move
% left and right from there) then reconstruct the time domain signal.
Nyq = length(X)/2+1;
X(Nyq-m+1:Nyq+m-1) = 0;
xRec = ifft(X);
end
function xRec = dctReconstructCompact(X, m)
% Starting from the highest bin, zero out `m` values and reconstruct
% the time domain signal.
N = length(X);
X(N-m+1:N) = 0;
xRec = idct(X);
end
</code></pre>
| 528
|
spectral analysis
|
What's a good grad-level audio DSP textbook?
|
https://dsp.stackexchange.com/questions/95762/whats-a-good-grad-level-audio-dsp-textbook
|
<p>What's a good rigorous grad-level textbook on DSP focused on audio/spectral analysis? I come from a half proof-based, half applied math/physics background; I have done several research projects where I worked with spectra and did audio analysis, but never really understood all the details I always see in posts on this site.</p>
|
<p>There aren't very many graduate-level DSP books that are specifically about audio. But there are important DSP books that you should have for graduate-level work in DSP, whether it's audio or some other signal class.</p>
<p>This would include, what we used to call <em>"scripture"</em>, <a href="https://rads.stackoverflow.com/amzn/click/com/0131988425" rel="nofollow noreferrer" rel="nofollow noreferrer">Oppenheim & Schafer</a>. Another old DSP book, that was the first to have sorta audio-oriented examples and student problems is <a href="https://eceweb1.rutgers.edu/%7Eorfanidi/intro2sp/" rel="nofollow noreferrer">Orfanidis</a>.</p>
<p>There <strong>are</strong> some textbooks on DSP and speech processing, an application space that shares a lot in common with audio. The only textbook I have (that I know of) is <a href="https://rads.stackoverflow.com/amzn/click/com/0132136031" rel="nofollow noreferrer" rel="nofollow noreferrer">Rabiner & Schafer</a>.</p>
<p>And, lastly, a good communications textbook can be useful for understanding frequency domain and spectral analysis of any signal class: <a href="https://rads.stackoverflow.com/amzn/click/com/0073380407" rel="nofollow noreferrer" rel="nofollow noreferrer">Carlson</a>. And, if you need undergraduate underpinnings, probably <a href="https://rads.stackoverflow.com/amzn/click/com/0138147574" rel="nofollow noreferrer" rel="nofollow noreferrer">Oppenheim & Willsky</a> would be good, but I suspect that the OP might be further along than this book. But it's a good reference along with O&S above.</p>
<p>These are all quite old textbooks. There are versions of O&S that are a half-century old. But they got the rigorous math and not all audio books have that.</p>
<p>Now, I do have three textbooks with the word "audio" on the cover and DSP with pretty rigorous math:</p>
<ul>
<li><a href="https://rads.stackoverflow.com/amzn/click/com/0805316841" rel="nofollow noreferrer" rel="nofollow noreferrer">Steiglitz</a></li>
<li><a href="https://rads.stackoverflow.com/amzn/click/com/0470997850" rel="nofollow noreferrer" rel="nofollow noreferrer">Zölzer</a></li>
<li><a href="https://rads.stackoverflow.com/amzn/click/com/1466560282" rel="nofollow noreferrer" rel="nofollow noreferrer">Reiss & McPherson</a></li>
<li>There is this book that I don't have (yet): <a href="https://rads.stackoverflow.com/amzn/click/com/1138591939" rel="nofollow noreferrer" rel="nofollow noreferrer">Pirkle</a></li>
</ul>
<p>These are all senior or graduate-level Electrical Engineering books. (Of course, any decent mathematician or physicist would have the math chops for these books.)</p>
<p>Now there are a few decently technical books, about digital music synthesis or computer music and audio, but maybe less mathy. Like <a href="https://rads.stackoverflow.com/amzn/click/com/0071663460" rel="nofollow noreferrer" rel="nofollow noreferrer">Pohlmann</a> or <a href="https://rads.stackoverflow.com/amzn/click/com/0240521056" rel="nofollow noreferrer" rel="nofollow noreferrer">Russ</a>.</p>
<p>That's about all I can think of at the moment.</p>
<p>About your emphasis in <strong>spectral analysis</strong>, they're all about <em><strong>both</strong></em> time and frequency-domain representations of audio signals. It's not just about one or the other. The two domains go hand-in-hand.</p>
<p>Now, if this is about some really specific spectral analysis of audio or acoustic signals, like analyzing the spectra of signals and yanking parameters out of such spectra, that's so parochial that you're really into the published literature in journals about it. I dunno. All this depends more specifically on what you're looking for.</p>
| 529
|
spectral analysis
|
Weak sense stationarity sufficient for multitaper spectrum analysis?
|
https://dsp.stackexchange.com/questions/19559/weak-sense-stationarity-sufficient-for-multitaper-spectrum-analysis
|
<p>This is a general question on multitaper spectral analysis of signals and stationarity.</p>
<p>I work with time series which are nonstationary. In this regard I have two questions:</p>
<ol>
<li><p>Is weak-sense stationarity of second order sufficient to apply the multitaper spectrum to compute valid estimates from contiguous time series objects, or is strict-sense stationarity required?</p></li>
<li><p>Multitaper works with signals that are periodic, quasi-periodic, nearly periodic, and semi-periodic; however, I find it hard to differentiate between periodic signals and the rest, so as to see whether the series I am working with belong to any of the mentioned groups.</p></li>
</ol>
<p>Your help will be welcomed</p>
|
<p>The answers to the questions are </p>
<ol>
<li>Yes, WSS is sufficient.</li>
<li><ul>
<li><strong>Quasiperiodic</strong> </li>
</ul>
<p><img src="https://i.sstatic.net/SIYxV.jpg" alt="enter image description here"></p>
<ul>
<li><strong>Semiperiodic</strong></li>
</ul>
<p><img src="https://i.sstatic.net/9ROKX.png" alt="enter image description here"></p></li>
</ol>
| 530
|
spectral analysis
|
What math should I study to really understand signal processing?
|
https://dsp.stackexchange.com/questions/91794/what-math-should-i-study-to-really-understand-signal-processing
|
<p>I am reading an elementary book on signal processing - "<a href="https://books.google.co.in/books/about/Signals_Systems.html?id=LwQqAQAAMAAJ&redir_esc=y" rel="nofollow noreferrer">Signals and Systems</a>".</p>
<p>It never struck me until recently that the math behind signal processing has a lot more depth than the book presents. I had always suspected some elementary linear algebra and mathematical analysis, but now I am certain it entails functional analysis, which is too opaque for me. Regrettably, the book offers no clue about any of this.</p>
<p>I am afraid this question could become obscure very soon. I would appreciate as many words as the answerer can spare. <strong>Please validate my understanding below.</strong></p>
<blockquote>
<p>Systems (operations thereof) are believed to be <a href="https://en.wikipedia.org/wiki/Function_space" rel="nofollow noreferrer">function spaces</a>. A function of this space could represent system action, say T(). Considering the LTI nature, these function spaces are linear and will have some "characteristics" (eigen properties). T() is a linear map. Due to the causal/real nature, T() is a real <a href="https://en.wikipedia.org/wiki/Hermitian_matrix" rel="nofollow noreferrer">hermitian</a> map. Signals are modelled as vectors. Harmonics (complex exponentials) are <a href="https://en.wikipedia.org/wiki/Eigenfunction" rel="nofollow noreferrer">eigenfunctions</a> of these spaces, and thus eigenvectors of T(). Also, a set of eigenvalues (collectively called the <a href="https://en.wikipedia.org/wiki/Spectrum_(functional_analysis)" rel="nofollow noreferrer">spectrum</a>) is associated with each system action. Thus, spectral analysis exploits eigen properties of LTI function spaces. Signal energy represents the <a href="https://en.wikipedia.org/wiki/Norm_(mathematics)" rel="nofollow noreferrer">norm</a> functional of this system space. A <a href="https://en.wikipedia.org/wiki/Transfer_function" rel="nofollow noreferrer">transfer function</a> is a spectral realization of T(). Earlier methods of system analysis pertaining to <a href="https://en.wikipedia.org/wiki/Operational_calculus" rel="nofollow noreferrer">operational calculus</a> have been superseded by spectral analysis (for LTIs) and <a href="https://en.wikipedia.org/wiki/State-space_representation" rel="nofollow noreferrer">state-space analysis</a>. In OC, LTI functions along with convolution and addition form an algebraic group.</p>
</blockquote>
<p>Typing it out helped. I still can't know if its correct. I would be grateful to receive feedback on it.</p>
<p><a href="https://math.stackexchange.com/questions/491283/will-spectral-analysis-help-me-understand-digital-signal-processing-better">Relevant Math.SE post.</a></p>
|
<p>The entire series of books titled <em>Signal Processing</em> and authored by Oppenheim, Willsky, plus some random grad student(s), is designed to give a student who has the typical mathematical knowledge of a 2nd-year EE student a tool kit to do practical work. It does not, and can not, give a full mathematical grounding of why everything works.</p>
<p>In order to accomplish what it does it has to present a lot of the math as axiomatic, or it gives hand-wavy demonstrations (not proofs, by any means) of the underpinning math.</p>
<p>There's two things you can do with this:</p>
<ol>
<li>Accept that it works, and use the tools. Generations of engineers have founded highly successful careers on the Oppenheim, Willsky, and SRGS books. You can just accept the lack of underpinning and take what they say on trust, and base a career on that.</li>
<li>Study the applicable math, to the extent you can. This is what I've done, although the last "official" math class I took related to this was real analysis; the rest I've picked up in bits and bobs from graduate-level classes (signal processing and state-space linear systems), and from books.</li>
</ol>
<p>Really, if you're blocked because you don't feel that you can start until you know <em>everything</em>, you'll never start <em>at all</em>. Just dive in, start doing work, and start learning all the practical bits that don't show up in the hard math at all.</p>
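<p>One concrete way to "just dive in": the eigenfunction claim from the question can be checked numerically in a few lines. The sketch below is illustrative (the filter taps and the test frequency are arbitrary choices, not from any of the books): feeding a complex exponential through an FIR (LTI) filter returns the same exponential, scaled by the frequency response <span class="math-container">$H(e^{j\omega})$</span>.</p>

```python
# Sketch: complex exponentials are eigenfunctions of LTI systems.
# h and w below are arbitrary illustrative choices.
import cmath

h = [0.5, 0.3, 0.2]                      # any FIR impulse response
w = 0.7                                  # test frequency, rad/sample
N = 64
x = [cmath.exp(1j * w * n) for n in range(N)]

# Convolution output, skipping the first len(h)-1 samples where the
# truncated input is not yet fully inside the filter.
y = [sum(h[k] * x[n - k] for k in range(len(h))) for n in range(len(h) - 1, N)]

# The eigenvalue is the frequency response evaluated at w.
H = sum(h[k] * cmath.exp(-1j * w * k) for k in range(len(h)))
```

In steady state every output sample equals <span class="math-container">$H(e^{j\omega})$</span> times the corresponding input sample, to machine precision.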
| 531
|
spectral analysis
|
What is spectral whitening?
|
https://dsp.stackexchange.com/questions/10183/what-is-spectral-whitening
|
<p>What is meant by "spectral whitening" in DSP?</p>
<p>What effect does spectral whitening have when used in image processing? (visually or otherwise...)</p>
<p>Where might spectral whitening be useful in audio processing or analysis? What would a spectrally whitened audio signal sound like?</p>
|
<blockquote>
<p>What is meant by "spectral whitening" in DSP?</p>
</blockquote>
<p>Spectral whitening is usually an attempt to make the spectrum of the signal "more uniform". One reason this might be a good thing to do is that it can have the effect of making the autocorrelation of the signal "narrower" (and closer to a Kronecker delta, for discrete-time signals). This can help localize in time.</p>
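<p>A sketch of that effect (illustrative code, not part of this answer's claims): dividing each DFT bin by its magnitude flattens the power spectrum, and the circular autocorrelation of the whitened signal collapses toward a Kronecker delta. The <code>eps</code> guard and the example signal are arbitrary choices.</p>

```python
# Sketch: spectral whitening by magnitude normalization, pure-stdlib DFT.
import cmath

def dft(x):
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

def idft(X):
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * n / N) for k in range(N)) / N
            for n in range(N)]

def whiten(x, eps=1e-12):
    # Keep each bin's phase, force its magnitude to (nearly) 1.
    X = dft(x)
    return [v.real for v in idft([c / (abs(c) + eps) for c in X])]

def circ_autocorr(x):
    N = len(x)
    return [sum(x[n] * x[(n + m) % N] for n in range(N)) for m in range(N)]

x = [0.9 ** n for n in range(16)]   # a decidedly non-white, "low-pass" signal
r = circ_autocorr(whiten(x))        # ~1 at lag 0, ~0 at every other lag
```

After whitening, essentially all of the autocorrelation mass sits at lag zero, which is exactly what helps localization in time.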
<blockquote>
<p>What effect does spectral whitening have when used in image processing? (visually or otherwise...)</p>
</blockquote>
<p>It's generally not pretty. Most images are "low pass" (most of the information is in the low frequency part of the spectrum). One simplistic approach to whitening in images is to do a column-wise (or row-wise) difference (i.e. <code>diff</code> in matlab).</p>
<p>This will mean negative pixel values, which generally do not map to anything sensible with standard images.</p>
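<p>A minimal 1-D sketch of this (illustrative pure-stdlib Python rather than MATLAB; the random walk stands in for a "low-pass" image row, and the band split at <code>N/8</code> is an arbitrary choice): a first difference - the analogue of <code>diff</code> along an image row - whitens the spectrum, at the cost of producing negative values.</p>

```python
# Sketch: first differencing as a crude whitener for "low-pass" data.
import cmath, random

def power_spectrum(x):
    N = len(x)
    return [abs(sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))) ** 2
            for k in range(N)]

def low_high_ratio(x):
    # Energy in the low bins vs the high bins (DC excluded).
    ps = power_spectrum(x)
    N = len(ps)
    return sum(ps[1:N // 8]) / sum(ps[N // 8:N // 2])

random.seed(0)
walk, s = [], 0.0
for _ in range(256):
    s += random.gauss(0.0, 1.0)
    walk.append(s)                                   # random walk: ~1/f^2, strongly low pass
diffed = [b - a for a, b in zip(walk, walk[1:])]     # diff undoes the accumulation: white again
```

<code>low_high_ratio(walk)</code> comes out far larger than <code>low_high_ratio(diffed)</code>, and the differenced row contains negative values, which is why it no longer maps to a sensible standard image.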
<p><a href="https://sites.google.com/site/kootsoop/prewhitening" rel="nofollow noreferrer">This example shows how prewhitening can improve localization in image processing template matching.</a> The picture from that link is:</p>
<p><img src="https://i.sstatic.net/bbdb1.jpg" alt="Localizing patch in an image, with and without prewhitening." /></p>
<blockquote>
<p>Where might spectral whitening be useful in audio processing or analysis?</p>
</blockquote>
<p>If you are trying to localize (in time) the onset of a sound, then it's possible that spectral whitening can improve this. It's also possible that it can reduce (disimprove) the SNR.</p>
<blockquote>
<p>What would a spectrally whitened audio signal sound like?</p>
</blockquote>
<p>For audio of speech or music, it will tend to bring in more higher frequencies.</p>
| 532
|
spectral analysis
|
Disadvantages of wavelet transform
|
https://dsp.stackexchange.com/questions/15148/disadvantages-of-wavelet-transform
|
<p>I have a question related to the wavelet transform: we know that while the Fourier transform is good for spectral analysis - i.e., telling which frequency components occur in a signal - it gives no information about when they occur. That's why the wavelet transform is suitable for time-frequency analysis. It is also good for signal denoising, but of course it has some disadvantages.</p>
<p>So I would like to know: what are the main advantages of the wavelet transform? Is it good for spectral estimation, like finding amplitudes, frequencies and phases, or does it just help us find discontinuities and irregularities of a signal?</p>
<p>Thanks in advance</p>
|
<p>If you consider the whole set of potential wavelet transforms, then you have a lot of flexibility. </p>
<p>For instance, should you use 1D continuous complex wavelet transforms, by analyzing the modulus and the phase of the scalogram, and provided you use well-chosen wavelets (potentially different for the analysis and the synthesis), and a proper discretization, you can:</p>
<ul>
<li>find discontinuities and irregularities of a signal and its derivatives <a href="https://i.sstatic.net/jg016.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/jg016.jpg" alt="enter image description here"></a></li>
<li>find break point location by wavelet ridge extrapolation</li>
<li>denoise</li>
<li>perform matched filtering based on templates (with <a href="http://arxiv.org/abs/1108.4674" rel="nofollow noreferrer">complex continuous</a> or <a href="http://arxiv.org/abs/1405.1081" rel="nofollow noreferrer">discrete dual-tree wavelet</a> frames)</li>
<li><a href="http://www.scholarpedia.org/article/Wavelet-based_multifractal_analysis" rel="nofollow noreferrer">analyse (multi-)fractalty</a></li>
<li>analyse frequencies (with Gabor wavelets for instance)</li>
</ul>
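<p>As a rough illustration of the last point (a sketch, not from this answer - the wavelet normalization, truncation width, and scales are arbitrary choices): correlating a signal against sampled complex Morlet ("Gabor-like") wavelets at different scales picks out the scale whose centre frequency matches the signal.</p>

```python
# Sketch: frequency analysis with a complex Morlet wavelet by direct correlation.
import cmath, math

def morlet(scale, width=3.0, w0=6.0):
    # Sampled complex Morlet wavelet; its centre frequency is w0/scale rad/sample.
    half = int(width * scale)
    return [cmath.exp(1j * w0 * t / scale) * math.exp(-0.5 * (t / scale) ** 2) / math.sqrt(scale)
            for t in range(-half, half + 1)]

def cwt_row(x, scale):
    # One scalogram row: |correlation of x with the wavelet| at every shift.
    psi = morlet(scale)
    half = len(psi) // 2
    row = []
    for n in range(len(x)):
        acc = 0j
        for k, p in enumerate(psi):
            m = n + k - half
            if 0 <= m < len(x):
                acc += x[m] * p.conjugate()
        row.append(abs(acc))
    return row
```

For a pure cosine at 0.6 rad/sample, the scalogram row at the matched scale (10, centre frequency 6/10 = 0.6) responds strongly, while a mismatched scale (30, centre frequency 0.2) barely responds.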
<p>Due to the redundancy, and the quantity of available wavelets (the same one is not best for different purposes), they can appear a little less efficient for the analysis of purely stationary and harmonic signals, for which Fourier is better suited.</p>
<p>The main drawbacks are:</p>
<ul>
<li>for fine analysis, it becomes computationally intensive;</li>
<li>its discretization, the discrete wavelet transform (computationally efficient), is less natural and less flexible;</li>
<li>it takes some effort to invest in wavelets, to become able to choose the proper ones for a specific purpose and to implement them correctly.</li>
</ul>
| 533
|